Future of Life Institute Podcast · Civilisational risk and strategy

AI vs. Cancer: How AI Can and Can't Cure Cancer, by Emilia Javorsky

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from Future of Life Institute Podcast. Editorial summary pending review.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).

Start → End

Across 92 full-transcript segments: median 0 · mean -2 · spread -228 (p10–p90 -80) · 3% risk-forward, 97% mixed, 0% opportunity-forward slices.

Slice bands
92 slices · p10–p90 -80

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes safety
  • Emphasizes AI safety
  • Full transcript scored in 92 sequential slices (median slice 0).

Editor note

Auto-ingested from daily feed check. Review for editorial curation.

ai-safety · fli

Play on sAIfe Hands

On-site playback is enabled when an episode-level media URL is connected. This entry currently has only a show-level source URL, not an episode-level media URL, so it points to a source page instead.

Episode transcript

YouTube captions (auto or uploaded) · video 9X1yA5YD-dU · stored Apr 8, 2026 · 2,344 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/ai-vs-cancer-how-ai-can-and-can-t-cure-cancer-by-emilia-javorsky.json when you have a listen-based summary.

What are we aligning to, right? Like, we have a society of value pluralism. That would be really a shame to lose. Sentient creatures care about very different things, but they are nevertheless enshrined in a civilizational architecture which both allows them to peacefully coexist when their goals misalign, but also allows them, and actually encourages them, to cooperate on Pareto-preferred possibilities. Rather than having one entity trying to regulate and control AI development, what you have instead is many different actors coming to a type of agreement about which kinds of capabilities are dangerous to develop. This agreement is then enforced multilaterally by this kind of cryptographic monitoring fabric. I think if we're only ever focusing on the world that we're really trying to avoid, you can do that all day long, but if you never build the stuff that you do want, then there's just less and less even to strive for. And I think we're at that point now for civilization. We need to level up and lean in. It's possible.

Welcome to the Future of Life Institute Podcast. My name is Gus Docker, and I'm here with Allison Duettmann. Allison is the CEO of the Foresight Institute. Allison, welcome to the podcast.

Thanks so much for having me. I'm really excited for the conversation.

Fantastic. All right. You have a bunch of writing on two different paths to developing AI: one you could call the centralized path and one you could call the decentralized path. On this podcast we've discussed a bunch of the risks of a decentralized approach to developing AI, but I think it would still be worth going into detail on the risks and benefits of centralization versus decentralization in AI development.

Yeah, I'm game. Let's do it.

Perfect. What do you see as the main dangers of a centralized approach?

Well, first, you know, it's never as clean-cut as you make it out to be.
You know, I think over time we're just evolving in different patterns, and it's very difficult to even distinguish centralized from decentralized. At what level are you even looking at it? In a nutshell, looking at it very crudely, a centralized path could mean centralization in terms of one company, or one actor, or one AI system, or one government gaining lots of control and power through AI development. And the decentralized path could mean anything from multiple different actors, to a wide variety of actors, to everyone in civilization really having a stake in AI development. So it's a spectrum, of course, always. But speaking very crudely, there's just a whole list of trade-offs. I have this post where I really go bullet point by bullet point, and I think sometimes it's really good to actually lay out the trade-offs and compare them for both centralized and decentralized systems.

So I guess a big one that one worries about, and perhaps this is my background in philosophy, is, from a meta-ethical perspective, the possibility of value lock-in, and the possibility of an end-of-progress lock-in to some extent. I think value lock-in is really ignorant of how values have developed over the history of civilization. If you think of how we have evolved as a society and how we've made progress over time, it is by people trying out different things, by different pockets going off and developing new cultural norms, et cetera, and then meshing it all together again. And I think ultimately, if you have this one centralized actor that stably controls the world, in a very concrete and radical example, then that's quite ignorant of how we have evolved pluralistically. Yeah.
So that's one thing that I'm very worried about meta-ethically, and I think it's something the AI community hasn't grappled with very well: just, what are we aligning to? Right? We have a society of value pluralism, and we have a society that, rather than trying to make us all agree on a specific value set, has mostly worked by creating architectures in which value pluralism can coexist, and in which different entities can cooperate for mutual benefit even if they don't share each other's goals perfectly. And so that would be really a shame to lose, and it's very unlikely that we'll get it right on the first try. Historically, we haven't really been good at it. So I think the first one is just the meta-ethical concern. I don't know if you want to dig into it; I can go down the list and rattle a few more off.

No, this one is central, I think, and important. So if it's the case that, say, one company or one government is the entity that arrives at AGI, or perhaps even superintelligence, first, and if there's only one of these systems, then it seems that the values of that company or that government might be able to assert power over the world for perhaps a long time. So I guess the central question is: is there a way to incorporate this feedback and critique that we have in our current culture, the way we develop values and refine them over time? Can that be incorporated into an AI system, or perhaps into an approach taken by a government or a company?
I hope it's possible, and I think there are these different paths for developing AI systems. One future I would like to see more of draws on a few notions. One is this notion of a Paretotopia, and bear with me, it's a utopian outline here, but it's basically a utopia of utopias. You go to a civilization in which different entities, including humans, possibly eventually AI systems, and perhaps eventually other sentient creatures, care about very different things, but they are nevertheless enshrined in a civilizational architecture which both allows them to peacefully coexist when their goals misalign, but also allows them, and actually encourages them, to cooperate on Pareto-preferred possibilities with other entities. And that doesn't always mean mutual benefit, but it at least means that at least one of these entities is better off cooperating, without having left the other one worse off. And so over time, this kind of blueprint of a civilization could move along these Pareto-preferred paths into not quite perhaps a Paretotopia, but definitely Paretotropic notions. This is a concept that was developed by Eric Drexler and Mark Miller, who I've co-authored a book with, and Eric and Mark have filled it out and colored it in to some extent.
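The "Pareto-preferred" condition described above has a crisp definition that can be checked mechanically: a move is Pareto-preferred when at least one party ends up strictly better off and no party ends up worse off. The sketch below illustrates this with made-up utility numbers; the party names and values are purely hypothetical, not anything from the episode.

```python
# Check whether moving from one outcome to another is Pareto-preferred:
# nobody is worse off, and at least one party is strictly better off.

def is_pareto_preferred(before: dict[str, float], after: dict[str, float]) -> bool:
    """True if `after` leaves no party worse off and at least one better off."""
    parties = before.keys()
    no_one_worse = all(after[p] >= before[p] for p in parties)
    someone_better = any(after[p] > before[p] for p in parties)
    return no_one_worse and someone_better

status_quo = {"alice": 5.0, "bob": 3.0}
deal       = {"alice": 7.0, "bob": 3.0}   # Alice gains, Bob is unchanged
bad_deal   = {"alice": 9.0, "bob": 2.0}   # Alice gains, but Bob is worse off

print(is_pareto_preferred(status_quo, deal))      # True
print(is_pareto_preferred(status_quo, bad_deal))  # False
```

Note that a Pareto-preferred deal need not be fair or optimal, which is exactly the point made in the conversation: it is a minimal bar for cooperation, not a complete social welfare criterion.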
But I thought it was always really inspiring, and I do think we can hopefully get to something like that, because to some extent that is how civilization has already evolved. Over time, rather than coercing each other into specific arrangements, it has become more valuable for us to figure out how we can get to where we want to go by also helping others along the way, by offering them potential deals that are also seen as good by them on their own terms. So this notion of cooperation has a lot of precedent in civilization. I don't think it's impossible for us to continue to enshrine it in civilization, but I do think it takes a lot of architecting and mechanism design, and a lot of trial and error, to figure out how to do it, and we don't have much time.

Yeah, that's also an important point, right? Talking about what's Pareto-optimal still seems quite theoretical, and so there's a gap between what's theoretically optimal and what can be implemented in the machine learning pipeline as it exists now, or in the corporate structure of a company, or in the governments that are currently in the lead in AI development. Do we have enough time to implement something like what you're thinking of here?

Well, I think Drexler makes this point in his post on Paretotopia, which is kind of interesting: over time, maybe automation, robotics, possibly even nanotechnology-enabled production makes the pie of civilization grow a lot, this GDP pie, I guess. And so the gains that you get by cooperating with each other, even on, let's say, a deal that is Pareto-preferred but perhaps not highly optimal for you, are so big that it actually becomes super attractive for you to cooperate.
And so cooperation might become much more attractive over time, if only we find our way into it. And of course, there are many different ways in which it could go wrong in reality, but I do think it might actually be a future that we're moving to, or can at least move to, by highlighting, and getting very precise about highlighting, many of these cooperative opportunities. One way in which civilization is currently suboptimal is that cooperation comes at a lot of cost, right? We have search costs: I'm probably not even aware of most of the ways in which I could cooperate very usefully with most of the world. I just don't know. I'm sitting here in the Bay Area; I'm just not aware of it. And that's because I don't have the time to go out there, roam the internet all day, and look for other people or other companies that might want to offer me something that I'd love to take on. But by decreasing search costs, AI can possibly help a lot with cooperation. And it's not only by decreasing search costs, but also, once a possible deal is found, by helping negotiate the terms. It takes a long time to actually negotiate contracts. That is a big transaction cost that we're all facing, and it really prevents us from reaching a much more cooperative world. And then finally, the enforcement side of things is also suboptimal in the way that we do it right now. So you could imagine a world in which you have these fiduciary AI systems that are really tied to your well-being and your welfare, that you trust.
Hopefully there are some privacy protections in place if they have access to a lot of your information, but they basically act as your multipliers out there in the world: they go out for you and look for these better cooperative deals, and once they've found them, they negotiate terms that are actually acceptable to you and that would leave you in a preferred state, and then finally help enforce them. There are various ways you could do that too, and I think we could totally get there. There's nothing stopping us from creating these types of entities. But it takes some intention.

It's something like: as the world moves from scarcity towards abundance, the strategy of cooperation becomes more and more preferable to each person, or each company, or each government. And on top of that, you have AI decreasing search costs and transaction costs and making the world more transparent, so you have more options to cooperate and more knowledge of those options as well. That's actually a pretty positive vision. Let's hope the world goes that way. But I think we should also touch upon the risks of decentralized AI development before we go further. Perhaps the biggest one, or the one I've discussed most on this podcast, is the risk of proliferating dangerous capabilities. Something we would think of as decentralized AI development would be, say, open-source development: everyone can take the system, everyone can fine-tune a model however they want and use it in the ways they want. And so you could end up in a world in which it's much easier to, say, develop bioweapons, or attack critical infrastructure, or carry out cyberattacks, and so on. That's what I see as perhaps the main downside of decentralized AI development. Do you agree? And what can we do to mitigate that?
Yeah, I mean, we would be doing ourselves a disservice, even while pushing for a decentralized path, if we ignored the risks that come with it. We would really be shooting ourselves in the foot. And while at Foresight we generally have somewhat of a bias towards open-source development, with Christine Peterson, our co-founder, having been instrumental in coining the term open-source software, AI just brings a host of new problems. It's not only that the decentralized development of technologies in general, including bio and nano, is risky enough on its own, let alone multiplied by AI; with AI, all of those risks possibly multiply, just because it allows a much larger number of people, at much lower financial and other cost, to develop more and more civilization-destroying technologies. And that is a real risk. I think we are almost incredibly lucky that so far we've been able to sail through, or let's say muddle through. We haven't quite sailed through; we have muddled through. So this needs to be taken into account, basically. And I do think that over time this offense-defense dynamic, and the way it's playing out, is something that has happened throughout the development of civilization, but now it's really supercharged. We just don't know, in the short run, how AI will influence offense and defense capabilities, and whether anything will become offense-dominant. So I think we need to be very, very careful here. It's definitely a very big problem that decentralized approaches face. But I do think we can build decentralized approaches with these risks in mind. It's just not the dichotomy that people usually think about, of: either you have risks of runaway technology...
So you need a centralized actor with perfect surveillance and enforcement capabilities to crack everything down; or, on the other hand, you have totally decentralized technology development, with the possible risks of this open-source development. Instead, we can think about what decentralized approaches for addressing these risks actually look like. How can you build in this differential technology development framework? How can you build in this d/acc framework that takes care of risks as you develop the technology, and builds for that in decentralized ways? I can go into some details.

Yeah, let's talk about some examples of how to do decentralized defense. The one that springs to mind for me is strengthening cybersecurity, or developing AI-enabled cybersecurity: using AI to defend against cyberattacks, for example. That seems to be something that can be implemented by companies in a decentralized fashion, but would be quite useful in making the world more secure and perhaps more stable.

Yeah. I mean, I think one big benefit of decentralized technology development is that you just have more eyes on the ball. That means that as technologies get developed, and they might turn from a white ball into a black ball, in Bostrom's terms, you have more people spotting this, right? The more people or entities you have red-teaming, the easier it is to figure out when something goes very, very awry. And I think many AI companies have found that it was actually once they released their products that they spotted some of the biggest flaws. So we need to bring the kind of superintelligence of civilization to bear, to red-team and actually make safe some of the technologies that we're developing over time.
I do think it's unclear whether just human red-teaming or human vulnerability discovery will be enough. We might have to automate some of that too, because the threats are getting automated, especially in cybersecurity. It's a very big question how much, or how fast, cyber offense will be exacerbated by AI development. So we also need to think about automated tooling that we can develop, and about how AI can help us in defense. But in a nutshell, I think the more eyes we can bring to the ball, the better. And I do think decentralization can come in handy not only in this swarming around red-teaming and checking whether something has been developed correctly, but even in the architecture and design of systems, and in how we could design them to be more resilient. For example, there is this really interesting prototype, or example, that I often use, which is the seL4 microkernel, which is one hell of a piece of technology, because it's a microkernel that is formally verified, so you can actually verify its security, but it also withstood a DARPA grand challenge, a DARPA red-teaming swarm. On the one hand, that means it combines two different kinds of security in this approach of defense in depth, which is awesome. But on the other hand, the way that it's set up and works actually uses this principle of least authority, by really separating the different processes, threads, virtual machines, and components that it's made of into different subparts that can then be verified.
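The principle of least authority mentioned above can be illustrated with a tiny capability-style sketch. This is not seL4 code (seL4 is a C microkernel with machine-checked proofs); it is only a toy Python analogue of the idea that a component should be handed a narrow capability rather than ambient access to the whole system. All names here are invented for illustration.

```python
# Toy sketch of the principle of least authority: instead of giving a
# component the whole system state, give it a capability object that
# exposes only the one resource it needs.

class ReadCap:
    """Capability granting read-only access to a single named resource."""
    def __init__(self, store: dict, key: str):
        self._store, self._key = store, key

    def read(self):
        return self._store[self._key]

def audit_component(log_cap: ReadCap) -> int:
    # This component can read the log it was handed -- and nothing else.
    return len(log_cap.read())

system_state = {"log": ["boot", "login"], "secrets": ["api-key"]}
cap = ReadCap(system_state, "log")
print(audit_component(cap))  # 2

# audit_component never receives a reference to system_state itself, so
# it has no path to "secrets": its authority is exactly the capability
# it was given, which is what makes each component easier to verify.
```

The design point is that authority flows only through explicitly granted references, so verifying a component means verifying what its capabilities can reach, not the whole system.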
And so I think we really need to think more about how we can create systems that are built bottom-up, where you can verify individual subpieces of critical infrastructure easily, and then build, in this modular way, more secure systems from the bottom up that are easier to defend and harder to attack. And how can you give access only when it's really required, to subparts of a component rather than to a whole system in itself? I think the decentralization piece just gives you so much more modularity in how you deal with access as well. So there are a bunch of architectural advantages, I think, to building things this way.

Is this compatible with how modern machine learning systems are built, though? Can they be separated into modular parts and examined one by one, or apart from each other?

Yeah, I'm not technical in machine learning, so I don't actually know. Again, I think it's to some extent a question of how you look at a system, or at what level you look at it. For example, if you look at AI development as a whole, I sometimes wonder whether we are running more towards a world in which we have many different specialized AI systems performing specialized tasks, or whether we're getting more of a, let's say, deep-research-style AI system that can perform a lot of different tasks, perhaps even paired with something like Operator, rather than what we currently have.

Well, it seems like the AI companies in front are gunning for general systems: systems that can work as agents, systems that can solve a wide variety of problems. So basically, they're trying to create AGI.
Perhaps we would have a safer world, and a more pleasant world, if companies were putting more energy into developing systems like AlphaFold, systems that are superhuman in a narrow domain. But it seems like the current strategy of the main AI companies is to go for AGI.

I mean, yeah, it's definitely, I think, within their mission statements, but I do wonder: the way we're currently set up, is it possible to push for more of these superintelligent but very specialized entities, which are then again enshrined in a larger civilizational architecture that perhaps resembles a supercharged version of the economy we currently have? There, it's just unclear to me. You do have systems like deep research, which I think are trying to create this one-stop shop for answering questions, but then you also have these incredibly specialized and really powerful systems like AlphaFold, where, as we just saw, AI systems can, to some extent, be co-awarded a Nobel Prize, via their creators, for the scientific contributions they've made. And so it's unclear, I think, which future we're racing towards. At Foresight, we do some RFPs about research automation, and this is very specialized research automation. We want to see a lot more automation of specialized research problems. For example, there's a really interesting prototype, BrainGPT, which is trying to create an AI system that can help you a lot with neuroscience literature. It's trained on neuroscience literature, and it can already propose and then predict the outcomes of neuroscience experiments better than human researchers can.
And so I think that if we incentivize more of these very specialized entities, and if we create these larger architectures in which they can cooperate, we can actively compensate for some of the centralizing dynamics that perhaps many AI systems face. Of course, ultimately it's an empirical question; I think we shouldn't kid ourselves that sometimes it really helps, for creating more intelligent systems, to centralize everything back into one. But there, again, it's a question of architecture design, and of how this centralization looks, because at the end of the day it really depends how you look at a company. Is it centralized? Is it decentralized? Well, it depends at what level of the company you're looking, and how it's set up. But I think the important thing about decentralization, when we name it, is that you have different entities that can keep each other in check. You have different entities that can monitor that development is safe, secure, and resilient. You have more and more eyes being brought to the table to check whether things are developed in a secure way. So I think there are a few properties of decentralized systems, including their resilience, that we have to try to build in and bake into system development as much as we can. And I do think that by actively supporting more specialized superintelligent subsystems, rather than the very, very centralized ones, we can sometimes make a difference.

Yeah. I wonder if we, and I'm thinking of humanity, could develop a system or a civilization where you have decentralization at the lowest level, and more centralization as you go up in complexity.
So, say, as you go from companies to governments to international collaboration, you have more centralization, but that centralized authority has authority over only a very small set of issues. To be more concrete: I'm imagining something like this. You have a number of companies in different countries developing AI in different ways; that's relatively decentralized. Then you have national governments regulating those companies in different ways, so you have some diversity there, some plurality in the way AI is developed. And then you have an international organization trying to prevent the most extreme risks. That's of course a centralized institution, but perhaps we could limit that institution's power to govern only a small set of issues, such as the development of new weapons, or self-improvement in AI systems. Do you buy this vision of decentralization and centralization at different levels of civilization?

Well, I guess if we want to build something like that, we should at least have this principle of subsidiarity in there: really only deferring to the upper level when it's actually necessary, and trying to make decisions at the lowest level of governance possible, right? But in general, I don't know, because from what I've seen, most centralized organizations have a few other problems. First of all, whichever one is furthest up the stack is, at the end of the day, the most powerful one, right? And so it does create this competition for power, and possibly for the most power-seeking actors to rise to the top. So that's number one: you possibly have a problem of internal corruption.
Then, I think, even if the internal actors in this very, very powerful meta-entity are benign, they are still somewhat prone to extortion by external actors. You have this kind of single point of failure, where if I wanted to attack that entire system, I would go to the top, right? You also have a problem of mission creep: the gradual expansion of the domain of interest and responsibility of whatever this organization is. We've seen that, I think, with so many governmental organizations that have just exploded in their domains. So I'm just not sure we can create this in reality in a way where I would say, "Yep, this seems like it has some positive dynamics in place over time." The more the lower parts of the stack have an ability to compensate for the power dynamics of the higher parts, and to cross-check and monitor and verify, the more optimistic I would be about a scenario like that. But at the end of the day, in your scenario, how would this entity decide what is actually an incredibly dangerous technology, and how would it actually enforce and prevent the development of that technology? Would it require something like ubiquitous surveillance and ubiquitous enforcement capabilities? How do you envision that? Because I think the proof is in the pudding.

Yeah. I mean, there are various suggestions for how to do this, and it's true that you would need some kind of surveillance, but perhaps you could limit that surveillance to the largest training facilities, or perhaps to the kind of surveillance that governments are already doing to each other. And hopefully you could avoid surveillance of private citizens, and so on.
But I agree that there are very difficult and thorny problems with implementing something like this. The problem is just that if we don't have something like this, we face the other side of the issue, which is the potential of human extinction, or the potential of incredibly harmful events.

Yeah. No, I definitely don't mean to sugarcoat one side. It's always easier to poke holes in someone's proposal than to come up with something yourself. But one thing I want to mention is that, even if you could design something like that in theory, I'm really worried that it's just not the world we're in right now. We are in a world with many different centers of power. Not a lot of them, but definitely a few strong contenders. China and the US and Russia are the strong contenders, and have been historically for some time. So this is not news, and power balances are dynamic and shifting, and there's just a lot on the line. So I wonder: if we're trying to move from this more multipolar world that we have, even though it's not very decentralized, there are at least a few entities holding power, to this more unipolar world with one final checkpoint for dangerous activity, how do we get there? Any effort to move there that originated in the US might, understandably, not be super welcome to Chinese actors, and they might not have total trust that this is happening to their advantage. The same goes for Russia, let alone onboarding Russia, or onboarding many of the other countries that are obviously also still very relevant.
So it's really difficult to move there, and if you try to just impose it, you're creating a kind of first-strike instability, where it's almost like the more credible such a system, perhaps US-originated or US-run, becomes to China or Russia, the more of an incentive they might have to strike first to prevent it, because for all they care, that is the kind of singleton takeover scenario they're worried about, and they just have a very different perception, I think, of how the world will work over time. So I think moving into these states, or even just credibly signaling that we want to go there, also comes with some costs, just because we are really not in this state. So I do think we should be careful, or mindful, of that. If we want to design anything like that, I think the way to design systems that prevent risks is: the more open-source we can make them, the more input possible, and the more from the bottom up we can design these systems, the more skin in the game different entities will have to uphold them, to trust them, to check that these systems are true. And I'm not at all arguing for total technological proliferation with no guardrails in place. But I do hope that we can, possibly using technologies including much of the cryptography that has been developed for a long, long time but is still economically somewhat lagging, develop new types of technologies that allow us to do some of the monitoring and checking for risks without revealing all the information that we might have. For example, one notion that I think is quite interesting is from Gillian Hadfield. If we wanted to create anything like these regulatory entities, she has this notion of regulatory markets.
I don't know if you've come across it. No, I haven't read about it. Okay, so in a nutshell, let me try to piece it together. I think there was a GovAI paper that built on it, so I'm probably meshing the two here, or describing the version GovAI was using. But in a nutshell: rather than having one entity try to regulate and control AI development, you have many different actors coming to an agreement about which kinds of capabilities are dangerous to develop, which we don't want to prioritize for now, and what is acceptable to develop for the time being. This would be more flexible than having one entity decide; it's a safety framework that is multipolar by design. And once the actors agree, the agreement is enforced multilaterally, by a kind of cryptographic monitoring fabric where you have to attest that you hit specific benchmarks that are still seen as safe, or, on the other hand, that you are not developing things that are unsafe. So over time you can have a monitoring fabric in place that does something we usually think a government does, but that is controlled by the different actors building it, that perhaps also has input from a civil-society committee which can verify that it is in line with civil society's preferences, and that is a little more adaptive to the pace the world is moving at.
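As an aside, the attest-without-revealing idea can be sketched with a plain hash commitment. This is a toy stand-in, not any real fabric's design: a production system would use zero-knowledge proofs or hardware attestation, and all names and fields below are hypothetical.

```python
import hashlib
import json

def commit(report: dict, nonce: str) -> str:
    """Publish only a hash commitment to a private evaluation report."""
    payload = json.dumps(report, sort_keys=True) + nonce
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(commitment: str, revealed: dict, nonce: str) -> bool:
    """An auditor checks a later selective reveal against the commitment."""
    return commit(revealed, nonce) == commitment

# A hypothetical lab commits to its safety-benchmark results up front,
# publishing nothing but a 64-character digest.
report = {"model": "demo-1", "dangerous_capability_eval": "pass", "score": 0.93}
nonce = "k3y-n0nce"  # secret randomness so the report can't be brute-forced
c = commit(report, nonce)

# Later, under an agreed trigger, it reveals the report (not the weights),
# and anyone can check that it matches what was committed to.
assert verify(c, report, nonce)
assert not verify(c, {**report, "score": 0.50}, nonce)  # tampering is caught
```

The commit-then-selectively-reveal shape is the core primitive; the stronger cryptographic tools mentioned in the conversation let properties be checked without any reveal at all.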
These are all possible structures we could be developing through the techniques that, I think, Ben Garfinkel and others summarize under structured transparency: using various cryptographic tools and privacy-preserving technologies to build systems that let you test very specific properties of a system without having to reveal everything. And there is an incentive for companies to sign up, because rather than revealing all of their model weights to everyone else, they might only have to reveal the things that really matter from a safety perspective. You could at least imagine things like this. Of course they're not economically viable or competitive yet, but they're not impossible to build either. Yeah, it sounds very interesting. The thing I would be most interested in is whether such a system could be used to prove that you are not developing a certain capability. That's a difficult one. I think that is the hard one. How do you show that you don't have some other training run going in the background, developing a highly capable system that could help users develop bioweapons, say? I think that's an unsolved technical problem. It is an unsolved technical problem, yeah. I mean, that is one of the trade-offs of having privacy in systems: you just don't know what's not being done. That's a much harder thing to test for. But the good thing about this is that it would create a little more trust, and we don't need a perfect solution. We need to create enough trust and goodwill, and enough ability to coordinate over time, that the incredible uncertainty we're under right now becomes a little more bounded. And historically there have at least been prototypes where this has been tried on a very small scale.
For example, I think William Binney, when he was still at the NSA, built a prototype there called ThinThread, where, rather than NSA analysts having access to unencrypted telephone data and information, they would only see encrypted information. But when specific tripwires were triggered that caused the system to flag, say, a discussion of terror or dangerous capabilities, that information would be revealed to the analyst, and at that point you'd have a human looking it over. What this allowed was monitoring much more of the incoming data, while a human could only look at the information when specific tripwires were triggered. So this was an internal way in which the NSA was trying to keep itself in check. But it was really just a prototype, and I think it was discontinued and never saw the light of day, even though it reportedly tested well. These are the kinds of things we can do to create more checks and balances, and even if they're not perfect, they can at least move us gradually into a world from which we can develop the next set of solutions. Yeah. So, one theme I see running through your writing is that you wish for humanity to carry the way we've been cooperating in the past forward into the future. And the future will involve an increasing number of AI entities that we have to cooperate with. You write about scaling cooperation, which is about beginning to cooperate human-to-AI and AI-to-AI, with humans staying in the loop of civilizational cooperation as we integrate more and more of these AI entities. How do you envision that? What are the main challenges? Yeah. Well, I mean, it's a very big question.
I know, but it's also a really exciting one to think about, because we're now really in a position where we are building some of these entities, so it's a very timely question. We are currently developing many different types of AI systems that are quite different from the cooperative partners we're used to, which were mostly humans. These AI systems don't share our substrate, and they have developed in ways that we have not evolved to parse. For example, even in a normal social situation, the fact that you and I are biological creatures, that we've grown up in similar cultural contexts with a similar biological and evolutionary history, gives me at least some confidence, some bounds, on how you will react to a specific situation. It's not perfect; there are many ways to trick this. But over time we've become pretty good at building culture, norms, and institutions, all the way from contracts to fancier mechanisms, for cooperating as human beings. So if we want to include AIs in this framework, we need to update much of the game theory, and much of the institutional mechanism design, that we've relied on for so long in civilization. And that's a really exciting task, because on the one hand you want to prevent AIs from colluding with each other, or deceiving you, using new types of strategies that we just haven't evolved to track yet.
For example, there is work, I think by Christian Schroeder de Witt and a few others, on steganography, showing how AI systems can already hide messages essentially in plain sight: unreadable to humans, but decipherable by another AI entity. Just by virtue of how they use and pass language, which is very different from how we do it, they might already be able to collude and deceive in these ways. Those are new ways they can deceive us that we've not evolved to parse, and we need to get better at detecting them. On the other hand, we also really want to draw on the new opportunities they give us to cooperate much better. There's a paper by Andrew Critch and, I think, Stuart Russell and a few others on open-source game theory, where they lay out basic bots that can read each other's source code and work out what types of cooperative deals they can strike. One benefit of open-source bots is this: when you and I try to enter an agreement, I can't actually look inside your head. I have some understanding of whether or not you'll deceive me, but I can't check. Even if you tell me you'll cooperate, will you follow through? I don't know. But open-source systems can actually check how the other one will respond. So they might enable us to create institutions that are much more collaborative and cooperative, where, conditional on the other party cooperating if you take a specific action, you can offer them much more favorable terms. Of course, it's not all rosy in what they lay out in the paper, but I think we can use this to our advantage. So we just need to start building.
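The source-reading idea can be sketched in a few lines. This is a toy stand-in for the program-equilibrium literature, not the paper's actual construction; the bot names and the string used as a stand-in for "source code" are invented for illustration.

```python
# Toy "open-source" players: each bot is a function that sees its own
# source and the opponent's, then returns "C" (cooperate) or "D" (defect).
def mirror_bot(my_src: str, opp_src: str) -> str:
    # Cooperate exactly when the opponent runs identical code: in that
    # case it inspects me the same way and provably makes the same move.
    return "C" if opp_src == my_src else "D"

def defect_bot(my_src: str, opp_src: str) -> str:
    return "D"  # unconditional defector

BOTS = {"mirror": mirror_bot, "defector": defect_bot}
SRC = {name: name for name in BOTS}  # stand-in for real source text

def play(a: str, b: str) -> tuple:
    """One round of a one-shot prisoner's dilemma with visible source."""
    return BOTS[a](SRC[a], SRC[b]), BOTS[b](SRC[b], SRC[a])

assert play("mirror", "mirror") == ("C", "C")    # mutual cooperation
assert play("mirror", "defector") == ("D", "D")  # and no exploitation
```

The interesting property is exactly the one described in the conversation: the mirror bot achieves mutual cooperation with a copy of itself while remaining unexploitable by a defector, something agents who cannot verify each other's decision procedure cannot guarantee.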
We need to start developing the game theory for this, and, importantly, we need to start trial-and-error testing of a few of these systems. Do you think we will need to limit AI systems, perhaps in their speed or their memory, in order for humans to stay in the loop? I'm guessing that in the future there will be a temptation, on the part of AI systems perhaps directed by humans, to collaborate at very high speed, to draft a document in ten minutes that would have taken humans weeks or months to create. So do we need artificial limits on these systems in order for us to stay a part of the way the world works and how we cooperate? Well, there's the question of whether we need them, and then whether we can enforce them. Especially since, in that scenario, it sounds like it would be much more competitive to create these contracts faster, and perhaps those who evolve and experiment with doing things faster are the ones that are better at creating economic value very fast. So it's going to be very difficult to throttle that artificially, I think. And if you think about it, the way we've evolved is often by having a technology come in, being kind of overwhelmed by it, and then getting better, and faster, at using it. One interesting example, which I think Christine Peterson, our co-founder, told me: when the ability to create newsletters with your own fonts first became possible, we basically created the most outrageous newsletters at Foresight, which was founded in 1986, because no one was able to use fonts correctly, and it was just all over the place.
It looked absolutely atrocious. Now we're really good at creating template libraries and style guides and so on. That is a very low-stakes example, of course, but even in the way we create contracts, I think we first get overwhelmed by the possibilities a technology creates, and then over time we get better at adapting to it, creating templates that other people can reuse, and just thinking better about it. So I'm not sure what, normatively, I think we should be doing, whether we should throttle these systems or not. But perhaps what we should realistically also be thinking about is how we can make humans better at keeping up with their speed, and at actually using and leveraging them more. We have a very big neuroscience and neurotechnology program in that area, which we're really trying to use to help humans get up to speed, to be able to compete with, merge with, and collaborate with AI systems. It's a bit more speculative and out there; I'm not sure if you want to go into it. But I think one way we can stay relevant in this world is by improving our ability to make use of these systems, whether that's through neuroscience and neurotechnology, or through AI systems that help us parse information better, or through AI-enabled forecasting of the different scenarios that would play out if a given contract were enforced, simulated out in advance. We can use these tools to get ourselves up to speed. Yeah, I guess it's a question of time, of whether we as humanity have enough time to adapt as we've done with previous technologies. We've adapted to the car, to factories, to industry in general.
The world has changed tremendously over the last, say, 300 years, and in some sense we've adapted to it. I think we're still adapting to the internet and social media; we haven't fully integrated and understood those technologies. And if that's any indication, it takes us perhaps decades to have this back-and-forth with a technology, adapting it wisely, seeing what we don't like, and changing it. If it takes decades, we might not have enough time with AI. Things might move so fast that we can't rely on our usual procedure of trial and error and updating the technology along the way. What then? If we can't rely on that, how do we quickly get up to speed? Yeah, I mean, we're going to find out, because when you said decades: decades is long. I'm not sure we have decades. I don't think so. I think we need to be faster than that. At the risk of repeating myself, I really do think we need to enlist AI technologies to help us with this, and become better at integrating them into our civilizational defense strategies, rather than relying only on the ancient tools of cooperation and sensemaking that we've used until now. I know this will come with risks as well, and we'll somehow need to try things out. But it would be very, very good if we could even just predict the pace of progress, and not just the pace of progress, but how different technological developments will influence each other. Even here, I think, we can enlist AI to help us. You know, Anthony Aguirre from FLI is also one of the co-founders of Metaculus, which has been one of my favorite projects in the Bay for a long time. It's a really fantastic forecasting platform, and they just launched a bot forecasting challenge.
So rather than only the forecasts that have been running on Metaculus for a really long time, where human forecasters predict and gradually get better at making sense of the world, we can enlist AI systems that never get tired, that can predict 24/7 across a variety of scenarios and give us really good simulations of the outcomes of specific actions. We need to lean into these technologies and enlist them to help us make sense of the world. And we're receiving so many grant applications right now that build on new prediction methods, even having different AI agents debate each other before predicting on a specific scenario. I know this is very in the weeds, but when you look under the hood, there is just so much technology we can develop now and learn from, all the way from sensemaking, to cooperating better, to really leveling up ourselves, our knowledge, and our ability to come together and form agreements. There is a lot we can do, and we just need to upskill. I remember going from high school to university and thinking: wow, this is different. I'm not prepared. I need to level up. And it was a year of learning 24/7, a pace of learning and a set of topics I was just not prepared for, coming from the German school system into the UK university system. These were just not things we were taught at my high school. I thought, I'm not going to make it. And then I leveled up and really leaned in. I think we're at that point now as a civilization: we need to level up and lean in. It's possible. So what do we need?
Do we need world leaders and company leaders to have AI advisors to help them look at forecasts? What are the best options for leveling up and for using AI to make better decisions? Well, it's kind of like chaining together a lot of the individual examples I mentioned, all the way from using better AI simulations to help us get at the possible trade-offs of specific decisions, because we will be facing so many high-stakes situations in the few years to come. Just having really good forecasts of how different outcomes might affect other outcomes, conditional forecasts and so on, and being able to simulate these scenarios out really well, would be awesome. That's one thing. The second thing is the ability we started the podcast with, which is this ability to cooperate much better individually, to create this kind of cooperative superstructure of civilization where you're just out there, really enmeshed and engaged with other people. That's great, but we can also bring it to bear on larger challenges, right? It's not just about us individually getting more of the things we want; ideally we also want to use these AI tools to find ways to come together on the large challenges, the Moloch traps, the multipolar traps, the big challenges we're facing. Coming to perhaps a few really high-stakes agreements over time that AIs can help us reach, enforce, and monitor would be fantastic, and there's plenty to choose from. Just trying out a few of these larger multi-way commitments would be great. Do you think decision-making at the highest levels, at the level of world leaders: do you think what's lacking there, in order for us to get higher-quality decisions, is information?
Do you think decision-making at the highest levels in the world is constrained by a lack of information and forecasts? Well, it's not just at the highest level. I'm trying to get away a bit from the highest level, in the sense that Eric Drexler, in his 1986 book Engines of Creation, laid out a framework of "design ahead" for specific technologies. Modeling out the risks and benefits of a specific technology is also very beneficial at a local level. For everyone developing tech, say even AI for bio, having a window in which you can design ahead, and that's not only prediction but also simulation software, which is getting much better, a window in which you first design the systems you will then build and model out what they will look like, gives you a delta between the level at which you can already create things in the physical world and the level of technological development you'll soon be at, along with those risks and benefits. Even just creating this design-ahead window then allows you to craft differential technology development strategies to build against the risks. So I think this shouldn't only be done at the higher level through prediction markets, but also through modeling and simulating possible positive scenarios, and possible negative ones, for specific technologies and for how they interrelate with other technologies, right? And it requires all of us to take a little more responsibility, I think, because the world is going to get much more complex. The better we get at building things safely and securely, and red-teaming them from the local level upwards, the fewer problems we'll have to deal with at the upper echelons. Mhm.
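One small, concrete ingredient of the forecasting tools discussed in this stretch of the conversation is pooling several forecasters' probabilities, whether the forecasters are humans or bots. A standard rule is the geometric mean of their odds; the numbers below are invented for illustration.

```python
import math

def pool_forecasts(probs: list) -> float:
    """Combine probability forecasts via the geometric mean of odds,
    a common pooling rule from the forecasting literature."""
    odds = [p / (1.0 - p) for p in probs]
    pooled = math.prod(odds) ** (1.0 / len(odds))
    return pooled / (1.0 + pooled)

# Three invented forecasts on the same yes/no question:
pooled = pool_forecasts([0.6, 0.7, 0.8])
assert 0.6 < pooled < 0.8  # the pooled view sits between the extremes
```

Conditional forecasts of the kind mentioned above can then be built by pooling separately under each assumed scenario and comparing the results.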
We've discussed a bunch of options for developing what we could call technology for cooperation: technology for humans to cooperate better, and for AIs and humans to cooperate better. I guess one worry is that we don't have that much time before we have very powerful AI, and many of these technologies are still at a stage where they're not ready to be implemented. This is something you've written about yourself: the reality check of whether we have enough time to use these technologies if timelines are very short. Say we have powerful AI before 2030. How do we grapple with that? Do we need a plan B? What do we need? A few months ago, and you've probably seen this too, you saw a lot of very short timeline scenarios on the internet, either specific forecasts or fictional scenarios where people grappled with two-to-five-year timelines. In most of these scenarios, the outcome was pretty centralized: either negative, with an AI singleton taking over, by creating mirror life or something, and only a small fraction of humans surviving, or positive, with something like a world government sweeping in and taking over technological development. And in many of the war games and tabletop exercises I've seen people do, these are the main solutions people reach for, right? Yeah. It seems that if we have short timelines, AI development is probably going to be more centralized, right? It's as if the game board is now set: we know which companies and which governments are relevant, and therefore which companies and governments are not relevant, and now we're moving quite quickly toward powerful systems.
So is there a sense in which short timelines are a bad sign for the technologies and the ways of cooperating we've been discussing? Yeah. I mean, it certainly feels that way now, just because of the lack of imagination, I think, in coming up with decentralized, multipolar, multistakeholder solutions that could work on short timelines. But on the other hand, it's not totally settled. We don't just have OpenAI; we have a few main AI developers right now, and the more time passes, the more other actors come online. DeepSeek might be just one case in point, but it nevertheless happened this year, and it came somewhat out of the blue. And I do think the value of open-source system design, or of more open designs generally, takes time to show, because it often takes time for different systems to build on technologies developed by other open-source designers and to create this very rich infrastructure of open-source design. So yes, in general I would say longer timelines favor these more open, decentralized approaches, but we are already seeing steps in those directions, and to the extent that we have a lever on how systems are built, we can actively influence that: by building in an open, collaborative way, and by developing more specialized sub-agents that can then be used in these more open-source frameworks and designs. Mind you, we also need to create frameworks for dealing with the risks that open-source design can exacerbate. I'm not saying just do it all in the open and don't worry about the risks. But we can actively push the world more in these directions, and many people are doing that right now. Many companies are doing that.
I think even Sam Altman has said that he sometimes wonders whether he has found himself on the wrong side of history by pushing for less open technology development within OpenAI, which had originally, I guess, set out to be a bit more open. So we have definitely seen somewhat of a shift this year, at least mentally and ideologically, toward these systems becoming a little more in vogue, for lack of a better term. The proof will be in the pudding to some extent, but we are making the pudding: through the work we're funding, the systems we're building, the way we're cooperating, and the systems we're praising, we are actively creating milestones and north stars that become more attractive. So even though the chances don't look amazing right now, because we have these very, very large entities, they're also not impossible. And the important thing to think about is where decentralization and open source actually deliver a few unique advantages that centralized systems don't have. One example, admittedly more speculative and from the crypto world: AI systems are pretty good at solving general problems, but some long-tail problems require solving edge cases or bringing local knowledge to the table, and they're not really good at gobbling up that data; it's usually not in the big datasets. Here, systems even within crypto and Web3 are pretty good, because they can incentivize individual actors to bring local knowledge to the table, through DAOs, decentralized marketplaces, smart contract design, what have you.
So you can draw on the areas in which centralized systems are perhaps not as competitive with decentralized systems, and try to build more there. Another example, for privacy-preserving tech, is applications in, say, healthcare or finance. Centralized AI often can't really touch those, because they require handling very sensitive data that, by law, or simply because people aren't willing to share it, centralized systems can't handle. If we rely on privacy-preserving technologies to create more federated, privacy-preserving approaches to making sense of data, without having to go through centralized systems, that's another big step toward creating specific niches in which decentralized systems can possibly outcompete centralized ones. Ultimately, yes, it will be an empirical question. But the more we put compensating dynamics in place, and the more we actively build with that future in mind, the likelier we are to arrive there. Is there perhaps another thing working against decentralized AI development, which is that the way modern machine learning systems are trained is very capital-intensive? It requires building large training clusters, drawing lots of power, and so on. As we scale up these systems and build more and more expensive clusters, I don't think we've solved the technical problem of decentralized training: training one system in, say, 20 different locations and then combining it in a certain way. Is that, along with the capital intensity and the sheer hardware build-out of these training clusters, also working against the decentralized approach? That is definitely still the default path we're on; you're right about that.
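The federated pattern described above, where raw data stays with each client and only model updates travel, can be sketched as follows. This is a minimal stand-in for federated averaging, not any particular system's implementation; the client names and numbers are invented, and real deployments add secure aggregation or differential privacy, since even the updates can leak information.

```python
def local_update(weights, grads, lr=0.1):
    """One gradient step computed where the data lives; raw records
    never leave the client (e.g., a hospital or a bank)."""
    return [w - lr * g for w, g in zip(weights, grads)]

def federated_average(client_models):
    """The coordinator averages the clients' model updates; it sees
    parameters, never the underlying sensitive data."""
    n = len(client_models)
    return [sum(ws) / n for ws in zip(*client_models)]

global_model = [0.0, 0.0]
# Each client computes its own gradient on its own private data.
client_grads = [[1.0, -2.0], [3.0, 0.0]]
updates = [local_update(global_model, g) for g in client_grads]
global_model = federated_average(updates)
assert abs(global_model[0] + 0.2) < 1e-9
assert abs(global_model[1] - 0.1) < 1e-9
```

The same update-averaging shape is what decentralized training projects scale up across sites, which is why the privacy niche and the distributed-training question are closely related.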
But there are at least specific prototypes trying to do it differently. There's Prime Intellect, which is trying to enable more decentralized training runs, and there are various other projects like it out there. Whether or not they will ultimately be competitive is a different question, but people are using them, they are actively being developed, and the more people wake up to the power that AI development holds, the more incentivized individuals are to take part in these more decentralized compute clusters. I certainly know of a few efforts trying to build up more decentralized, community-held or individually held compute clusters and see how they can collaborate with each other, and I think that's kind of inspiring. There are probably specific problems that centralized systems will always be better at solving, absolutely. But the more we can incentivize folks to come together to build these alternatives, the more likely we make them to at least be viable in specific scenarios, and they have already proven able to produce interesting solutions to sub-problems. So I can't say whether I'm hyper-optimistic here; to some extent it's an empirical question, a question about what is actually required to build very powerful AI systems. But my answers are always mixing the normative with the descriptive to some extent, because we ultimately have to choose the path we think is normatively better while it is still realistic enough, possible enough, to be worth striving for. So it's going to be difficult, but we have it in our own hands. Yeah. When you think about the future of AI, how much do you draw on history?
Which lessons can we take from history? It seems like AI could be so different from many other technologies we've seen in the past; on the other hand, humanity has undergone technological revolutions before, and it seems we have an ability to adapt. Which lessons should we take from history? There are plenty, and I'm not a historian, but two things often pop into my head. On the one hand, when you think about this long-term game of centralization versus decentralization, or offense versus defense, and which will win out: we've played that game for a long time, and the answer is maybe both, or rather a continually evolving balance of forces. Often systems start out decentralized; then over time it becomes more economical to centralize some of these tasks. But there are trade-offs to centralized systems too, one being that they're not as innovative, so people start innovating in specific sub-pockets and either break out or fork out of the larger system, or competitors come in that can do whatever is relevant in the new technological reality in a more concise, specialized way, and they grow over time to become the dominant player in a specific arena, until again different technological realities become available and different subsystems are built. I think we've seen this play out over and over, in the history of everything from the mainframe computer to the internet, to social media, and even within crypto. And to some extent the question is just: will it eventually crystallize into one substrate? Will AI make this different, or will it be the same? I find it hard to imagine that we won't keep coming up with
better AI systems, developed in a more specialized way, that can do things with a very different focus than previous generations and then out-compete the old generation of AI systems. So I find it very hard to think that we are currently putting a stop to this entire development and will just lock in this one AI, or world government, or private actor, that locks down progress forever. I find it very, very hard to imagine that that's the way the world will turn out and crystallize. So that might give us some hope: there's always something that will become more competitive over time.

The other lesson: for everything I've said about trying to leave the future open — allowing us and our descendants and other entities to reinvent the rules from within the game, and taking a somewhat more open approach to the evolution of AI systems — we can nevertheless point to historic examples where we have been relatively successful at putting systems in place that set up the baseline conditions reasonably well. The obvious one is the US Constitution. Many people have many problems with it, but it was good enough that it got copied a lot, and good enough that, to some extent, we're still living more or less in that world. That is something the framers got extremely right.
If you could talk to the founding fathers now, they might not be thrilled about the current expression of the Constitution, but I think they would be flabbergasted that, in a world of such technological maturity, it still more or less works as a framework within which we can invent the next, better systems. And now we are in that same scenario again: we have to set something like that up for a world that will mostly be run by intelligences vastly smarter than us.

Isn't that an argument in favor of value lock-in, then? If the US Constitution was able to last hundreds of years and is still influential to this day, isn't that an argument that the values we put into our AI systems now are going to persist for a long time?

Well, what's interesting about the Constitution is that it didn't really do that. What the Constitution did was put in place a system of checks and balances, of procedures, of rule of law, so that different value systems could flourish and thrive and counteract and compensate and keep each other in check. And I think that's what we need to do. We can't just take our hands off the steering wheel and leave it all up to the evolutionary dynamics at play; we need to create systems that make cooperation on shared goals easier and more beneficial, that restrain power centralization, and that restrain actions that would wipe out the entire playing field. We can put these side constraints in place without necessarily having to say all that much about value alignment, or value lock-in, or exactly what type of future we would want from an ethical perspective.
But we need to create the playing field on which the next civilization then has a chance to iterate on the game, make up new rules, and play it forward. One crazy scenario we might have to think about soon — and this sounds sci-fi, but I don't think it's actually that far off — is space property rights. If you look at the evolution of property rights, the innovations within them, we had a lot of time for this, and there are many different theories of property rights, but we have evolved systems that allow us to create rights to do things with objects. Property rights really are a right: the right to a thing is a right to do a thing with it. We see that a lot right now — I think Hernando de Soto had a book about people coming into some Latin American or sub-Saharan African communities and trying to draw up new land titles for specific 3D plots of land, but the titles didn't match at all how the communities actually wanted to use that land. Often it wasn't a matter of discrete plots: different people had to cross the land to get to a waterway, so the land was used communally for parts of the day and privately for other parts. The way we had drawn up property in the West just didn't quite work for those use cases. There you really see that the right to a thing is a right to do a thing with it. The same goes for how we carved out property rights before we knew radio spectrum was a thing, and then had to update them. In any case, we had a lot of time for this, a lot of time to adapt, and we're still adapting.
There are pollution credits now, and all kinds of noise-pollution problems we're still tackling — so it's not that we've perfected the property-rights problem, but we've had a lot of time to adapt. Now we might soon have to rethink it drastically, because if we create incredibly powerful technologies, including AI, that might want to make use of space resources — possible property — the 3D slices of the world that we currently draw just don't apply there. In space it's much more about what actually has exposure to the sun, how energy works, whether you can build a Dyson sphere between me and the sun — yes or no? These things will become relevant not immediately, but soon enough that we have to drastically rethink the way we've done things, and we don't have much time for that either. The fear is that if we don't think things through, or simulate or model things out, then it will just be first come, first settled, and that will determine who owns what.

My impression is that we have actually attempted to create some space law, or property rights for space. But much of that legislation is from the 1960s or 70s, when they were probably not thinking of an AI-driven race to settle space. So it seems right to me that some of this legislation should be updated in light of what we now know about AI and what's possible. But it's a difficult problem — it's difficult even to know where to start. Property rights have been a good technology for cooperation on Earth, for allocating resources efficiently and so on; it's unclear how that maps onto space, or whether property rights will be useful there.
Also, the example you mentioned is interesting: can you harvest the energy from the sun such that no one else gets sunlight, and therefore they die, basically? Things might look different in space, is what I'm saying.

Yeah, they surely will, I think. And some of the current space treaties that exist are either outdated or some countries just haven't signed up. I think when you bought Starlink, you were also agreeing not to accept specific space treaties — those were about Mars, though. So it's not at all clear that every entity that will have a claim to space will uphold those treaties. And the interesting thing here — it's the same thing we talked about earlier — is that even if you could come up with a perfect system theoretically and abstractly, creating enough skin in the game for every relevant actor to uphold it would be really difficult. You need it to remain more beneficial for every entering party to uphold the system you're developing than to try to overthrow it. And because it has no legitimacy from the get-go — it hasn't been around for millennia, so we can't just point to the fact that we've always done it that way — it's going to be very difficult to come up with something in this emergent way that has enough legitimacy that we'll all continue using it.

Yeah. If you're in front — if you're the most likely to settle space, if you're leading the race — there's a temptation not to care about any restrictions, not to let any of these potential treaties restrict your settlement of space.
Now that I'm thinking about it, maybe we have some ideas for solving this, because we've solved this problem on Earth before — settling unclaimed land, or settling the seas, something like that.

Oh yeah. There was actually a chapter in Gaming the Future, the book we wrote, where some of these ideas come from — a chapter we then cut because it was just too wild and speculative. One idea goes like this. On the one hand, we're facing a problem with AI automation: many people will lose their jobs, and the more capital you have entering any strong-AI or transformative-AI world, the better set up you are in that world. As your labor becomes less valuable, capital will still be something AI systems want from you — capital, land, resources, et cetera. So we were thinking: how can you equip people to remain relevant in a strong-AI world, with something to bring to the negotiating table with AI systems? One way is to give them capital or property. And where could you take that property from? Well, there's a lot of unclaimed property in space that will soon be claimed by whoever gets there first. If we want to avoid it being claimed arbitrarily by whoever gets there first, then we should create something called an inheritance day — which Eric Drexler brought up in Engines of Creation in 1986 — a day by which the remainder of the at least accessible universe, the light cone or however you want to slice it up, gets divided across everyone alive. You can then trade on the expected value of that property with whoever wants to go out there and make use of it.
This would solve the problem of having capital going into a strong-AI world together with the other problem — that we have to solve the space property problem — and it would give everyone a very comfortable financial start in this AI world. There's a lot of space out there, a lot of useful resources.

We would be a very privileged group, us currently existing people, if we all got a large share of space. But I guess the problem is then: why would an AI that's settling space respect any of these property rights, instead of just doing whatever it can to acquire as many resources as possible?

Well, we don't know that; obviously we haven't built these systems yet. But the earlier we put these systems in place, the better. There are a bunch of different problems to run into, even just how you divide up that space. For example, you'd probably first only make part of the deal whatever becomes accessible in the next ten years or so, because things are also moving away from us faster, and it's very difficult even to see what will become available by the time we can settle it. But there can be timed releases and delays, and people have actually thought about this, so there are some solutions. The idea is that if we can come up with a solution, the earlier the better — and first we need to make it one that many nation states and their citizens and civil societies adhere to. That would be the first step.
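The inheritance-day idea sketched above — an equal per-capita division, vested in timed tranches, with shares tradable on their expected value — can be made concrete with a toy model. This is purely illustrative: the transcript proposes the scheme only in outline, and every name and number below is hypothetical.

```python
# Toy sketch of an "inheritance day" allocation, purely illustrative.
# Equal per-capita shares of an eventually-accessible endowment,
# vested in timed tranches, with shares tradable between holders.

from dataclasses import dataclass

@dataclass
class Claim:
    holder: str
    share: float          # fraction of each future tranche
    released: float = 0.0 # resource units vested so far

def inheritance_day(population: list[str]) -> list[Claim]:
    """Divide the endowment equally per capita on inheritance day."""
    share = 1.0 / len(population)
    return [Claim(holder=name, share=share) for name in population]

def release_tranche(claims: list[Claim], tranche_units: float) -> None:
    """Vest a timed tranche (e.g. resources reachable in the next decade)."""
    for c in claims:
        c.released += c.share * tranche_units

def sell(claims: list[Claim], seller: str, buyer: str, fraction: float) -> None:
    """Trade on a claim's expected value: move part of a share."""
    s = next(c for c in claims if c.holder == seller)
    b = next(c for c in claims if c.holder == buyer)
    moved = s.share * fraction
    s.share -= moved
    b.share += moved

claims = inheritance_day(["ada", "bo", "cy", "dev"])
release_tranche(claims, 1000.0)   # first tranche: 1,000 resource units
sell(claims, "ada", "bo", 0.5)    # ada sells half her remaining share
release_tranche(claims, 1000.0)   # second tranche vests under new shares
```

Note that already-vested tranches stay with the original holder; only future tranches follow the traded share — one of many design choices (alongside release schedules and pricing) that the conversation leaves open.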
It's not just how AI systems adhere to it, but how we make it a stable Schelling point for other humans and the various governing bodies. Once we've done that, we can enshrine it, to some extent, in the architectures through which we cooperate with AI entities. And yes, if they don't uphold any of our legal systems, if they're just going to blast right through them, of course they won't uphold this either. But the earlier we can reach a mutually agreeable solution that civilization can more or less uphold, enshrine it in contracts, enshrine the contracts in code, and have that be the reality these AI systems grow up in, the more likely they'll consider it legitimate — more likely than if we suddenly bring it to the negotiating table: "Oh, by the way, we granted all humans these wonderful space property rights." So we're very much in speculative land here, but again, we don't have much time; these are all problems we are facing now, and we need to think about them.

To what extent do you think technologies are created versus discovered? Let me explain what I mean. If a technology is created, it means we are making choices that influence how it turns out — the technology is more or less what we wish to bring into the world. If a technology is discovered, it means we are exploring the tech tree, say, and stumbling upon something that works the way it does because of what's inherent in it — ultimately, the laws of nature and the laws of physics. Do you think we are beholden to discovering technologies that then just work in ways we can't foresee?
It seems to me that, for example, the way large language models have turned out is mostly us stumbling into that paradigm; it works the way it works, and it's not the result of a set of deliberate choices.

Again, I'm way out of my depth here, because I'm not a technical scientist in a specific domain, so take anything I say with a huge grain of salt.

It's almost a question of philosophy, or of the history of technology — not necessarily a technical question. It's a question of how technology works at the grandest level.

Yeah. Different people have very different theories about it; there are strong proponents of both views, and examples of both, I think. Actually, I came across a really interesting blog post the other day where someone tried to calculate how useful very large-scale scientific projects have been. Andrew White — at Future House, I think — wrote it, calculating how much CERN cost and how much we actually discovered through it, and whether that was a useful thing to do, and likewise for other projects like the genome project. He was trying to see: was this a useful undertaking, can we do anything that directed? He was relatively pessimistic by the end of the post, but you never know the collateral effects; we could look back on this in 200 years and have a very different opinion.

But back to your question: to some extent, I think we're discovering things about the world, and in another way we can still usefully influence the path through the tech tree by ordering the arrival of some of these technological developments. And here we're back in the offense-defense dynamic.
That's why at Foresight we're also building tech trees — I've done a bit of that even in collaboration with FLI. The reason they're useful is not that you'll eventually tick off every capability on the tree, but that by making the tree clear and creating common knowledge around it, you can sometimes incentivize the development of specific technologies first, or in tandem with each other — which means you can sometimes incentivize the development of safety- and security-enhancing technologies first within a tech tree. The ordering of technological development matters. If we actively put resources toward, say, getting much better computer security very fast, before we have a world of strong AI, I would already be a lot more optimistic. I think computer security is still a hugely undervalued issue. I know many people in our respective subcommunities also care about this, have cared about it for some time, and are definitely gearing up to care about it a lot now — but much of that work focuses on AI security and securing AI labs, rather than on general computer security across civilization: securing all the physical infrastructure, including electric grids, nuclear facilities, and so on. We're really bad at that, and it's going to bite us, for lack of a better term. If we were in a much better world in which computer security was much stronger, I would be much more optimistic generally about the future of AI development. And if we'd had a tech tree that laid that out long, long ago, maybe — just maybe — we could have built systems more with that in mind.
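The "ordering" idea here has a natural algorithmic reading: a tech tree is a dependency DAG, and preferring safety-enhancing technologies whenever several are buildable is a prioritized topological sort. The sketch below is a minimal illustration under that reading; the tree itself is invented for the example and is not Foresight's actual tech tree.

```python
# Toy sketch of "ordering the tech tree": topologically sort a
# dependency DAG, developing safety/security-enhancing technologies
# first whenever more than one technology is currently buildable.
# The tree below is hypothetical.

import heapq

# tech -> (is_safety_enhancing, prerequisites)
TECH_TREE = {
    "formal verification":   (True,  []),
    "secure microkernels":   (True,  ["formal verification"]),
    "hardened power grids":  (True,  ["secure microkernels"]),
    "agentic AI assistants": (False, []),
    "autonomous AI R&D":     (False, ["agentic AI assistants"]),
}

def development_order(tree):
    """Kahn's algorithm with a priority queue: safety tech pops first."""
    indeg = {t: len(deps) for t, (_, deps) in tree.items()}
    dependents = {t: [] for t in tree}
    for t, (_, deps) in tree.items():
        for d in deps:
            dependents[d].append(t)
    # priority 0 = safety-enhancing, so it sorts ahead of priority 1
    ready = [(0 if tree[t][0] else 1, t) for t in tree if indeg[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)
        for nxt in dependents[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                heapq.heappush(ready, (0 if tree[nxt][0] else 1, nxt))
    return order

print(development_order(TECH_TREE))
```

On this toy tree, the whole defensive chain (verification, microkernels, hardened grids) is scheduled before any offense-relevant capability — which is the point of making the tree common knowledge: the ordering, not the final set of capabilities, is what you can influence.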
So sometimes I do think that by creating common knowledge, by simulating out and designing out what's possible, you can influence the ordering of technological development — not perfectly, but to some extent.

How should we think about having children if we are at the cusp of developing very powerful AI?

Now, this is definitely not advice, nothing prescriptive, because I'm not even a parent yet — though I will be in three weeks, if everything goes well from now on. So take this with more than a grain of salt: here's someone who doesn't yet have kids talking about that future. But it's also someone who has grappled with it, because I've created what will soon be a child, and I've done so very intentionally. I think, to some extent, we have to believe that the world will continue and that we have a future — to me, that's almost the number-one necessary condition for us to have a chance at a future at all. In collaboration with the Future of Life Institute, we really leaned into our existential hope track at Foresight and did a lot of worldbuilding around positive futures — what a positive world with strong AI could look like in the next ten years. If our entire work on existential hope has taught me one thing, it's how important it is to keep at least a grain of hope that we can make it through, because if we don't, the chances go way down. If we only ever focus on the world we're trying to avoid while never actively building toward something — you can do that all day long; you can just try to prevent things you don't want.
But if you never build the things you do want, there's less and less even to strive for, because you're not putting anything in place that actually makes life worth living. Having a child is probably one of the most impactful things I will ever have done — definitely one of the most formative — and it's what makes life worth living at the end of the day. Without creating these beacons of hope for yourself, what's the point of it all, really? So I don't feel guilty about it, I have no qualms about it, and I'm pretty excited. And I think there's something to hyperstition here — maybe it's not literally a real thing, but if all of us engage with the world through a more positive, collaborative lens, rather than only through zero-sum dynamics of "we might not make it through, the whole economy is going to crash," you are, to some extent, creating the world you live in just by virtue of how you show up in it every day. So this local living of your values is important.

That's a good way to end this interview, I think. Allison, thanks for chatting with me.

Thanks a lot, Gus. It was very fun.

Related conversations

- AXRP · 7 Aug 2025 · Tom Davidson on AI-enabled Coups — examines core safety, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
- AXRP · 1 Dec 2024 · Evan Hubinger on Model Organisms of Misalignment — examines technical alignment through the same lens.
- AXRP · 11 Apr 2024 · AI Control with Buck Shlegeris and Ryan Greenblatt — examines technical alignment through the same lens.
- Future of Life Institute Podcast · 7 Jan 2026 · How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann) — examines core safety through the same lens.