Library / In focus

Back to Library
Future of Life Institute Podcast · Civilisational risk and strategy

Why OpenAI Is Trying to Silence Its Critics, with Tyler Johnston

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from Future of Life Institute Podcast. Editorial summary pending review.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).


Across 65 full-transcript segments: median 0 · mean -4 · spread -336 (p10–p90 -130) · 3% risk-forward, 97% mixed, 0% opportunity-forward slices.

Slice bands
65 slices · p10–p90 -130

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes safety
  • Emphasizes AI safety
  • Full transcript scored in 65 sequential slices (median slice 0).

Editor note

Auto-ingested from daily feed check. Review for editorial curation.

ai-safety · fli

Play on sAIfe Hands

Episode transcript

YouTube captions (auto or uploaded) · video jqPDc9JpOc0 · stored Apr 2, 2026 · 1,730 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/why-openai-is-trying-to-silence-its-critics-with-tyler-johnston.json when you have a listen-based summary.

If the final input at the end of the day that informs regulation is what the public wants and who they vote for, then at a certain point the money stops working for you. Within the next 10 years, we'll have AI systems that could outperform all humans, and along with that they'll outperform all humans in conducting cyber attacks or developing new weapons. The transparency letter, it wasn't making any claim about how the restructuring should go, but it was just asking for kind of more clarity from OpenAI. OpenAI is willing to go to great lengths to silence critics when they think that it's important to do so. It wanted to know every single person who ever donated to us and the date and amount of that donation. It is a matter of fact that they didn't succeed in really slowing us down. In fact, I think that they kind of made a mistake here, where this is a bad comms moment for them. In the absence of really strong technical solutions to the problems AI faces, or governance solutions where it's like, here are the exact controls and committees and evaluations you need in place, we should at least know what's happening. We should at least not be walking to the cliff blindfolded. >> Welcome to the Future of Life Institute podcast. My name is Gus Docker, and I'm here with Tyler Johnston, who is the executive director of the Midas Project. Tyler, welcome to the podcast. >> Thank you so much, Gus. It's great to be here. >> Great. All right. Why don't you start by telling us about the Midas Project, its mission, its history, and so on. >> Yeah, so the Midas Project is a watchdog nonprofit focused on frontier AI developers. I started the Midas Project about a year and a half ago now.
Um, I was transitioning from working in animal rights, where I did corporate accountability work, and had been following AI since, you know, 2019, when I was playing with Jukebox and the GPT-3 API when that came out and thought it was so exciting that AI models could generate coherent paragraphs of text. And then, yeah, starting in 2023 when GPT-4 came out, I started to feel the acceleration a bit, and I started to get a bit worried that our society wasn't prepared for what was coming and the companies themselves weren't prepared for what was coming. I think many of them would admit that, though I don't know if they're so keen to admit it now. >> And I thought that some of the same corporate accountability playbook that, you know, I was using in the animal rights space to try to encourage stronger self-governance among food companies in terms of how they treat their animals... I thought similar things could be effective when it came to using public communications and advocacy to ask AI companies to adopt stronger voluntary safeguards. So that was... >> And what is that playbook? Could you walk us through your thinking here? Why did you expect that to be successful? >> Yeah. Um, at the core it's an incentives question. In an industry where companies respond to the incentives created by public opinion, you can have a lot of leverage by just taking a flashlight and shining it around to show customers kind of what's happening behind the scenes, whether it's the supply chain that the company is sourcing its materials and inputs from, >> or, in the case of AI, the sort of externalities that are being generated, even the ones that haven't materialized quite yet, and, you know, what the companies themselves believe about what the technology could do to our society. So, I mentioned animal rights as the case I know best.
I think there's this very neat thing you can do where you can go to a company and, if they're, you know, selling eggs on a store shelf, it's a very pristine, clean environment, and there's this metaphorical curtain that is hiding that experience from the pretty grisly experience that animals are facing in their supply chain. And through public communications, you can essentially pull that curtain open and show customers kind of the whole picture. And customers, I think rightly, find it a bit outrageous. And the company will respond to that incentive by, in the case of, you know, animals, adopting... we were asking companies to adopt commitments to source cage-free eggs or improve the treatment of broiler chickens that are raised for meat in their supply chain. >> Mhm. >> But, you know, similar tactics have been used in the environmental movement and in other movements, and I think basically any industry where there are these kind of negative externalities or these kind of ugly things that are happening behind the scenes, or with a bit of causal distance between the product and the bad thing. And if it's a competitive market where people care about what customers think of them, or if it's an area where regulators are active and want to do something and they care about what the public thinks, um, you kind of have this powerful leverage as an actor who is just doing communications and public advocacy to move companies that are thousands of times larger than you, simply by shining the flashlight around. >> And so in the case of animal welfare, or animal agriculture, factory farming, you have kind of concrete externalities or concrete cases of harm, and perhaps you're beginning to see that in the AI space, but what are you pointing at? What are you shining the flashlight on?
Yeah, that is a good question, and I think you can shine the flashlight even on, say, speculative or currently immaterial harms. >> And one way to do that is to just point out that experts in the field and many of the people at these very companies >> will freely admit, to their credit, I should add, like, to their immense credit, they will freely admit that these harms are quite realistic: that within the next 10 years we'll have AI systems that could outperform all humans, and that along with that they'll outperform all humans in conducting cyber attacks or developing new weapons. And so, um, you know, that's one of the things you can shine a light on. >> Yeah. Do you worry there about creating the wrong incentives? So if you have a company that's disclosing their worries about this emerging tech to the public, they are acting responsibly, but they are also giving ammo to an organization like the Midas Project, which could come back to haunt them. So do you worry about people, and specifically leaders at AI companies, um, you know, keeping their worries to themselves so as to not give ammo to a project like this? >> Yeah, I do worry about it, and I should also add, I brought that up as a hypothetical, but I think that there are [clears throat] other, frequently more useful, things to shine a light on, and those can include the more concrete failures that are indicative of problems we'll face in the future. So for instance, the alignment failures that recent models have had, you know, like there were visible ones at xAI, around, like, MechaHitler, when that was going around the internet. With cases like that, you are less susceptible to this problem, but you're still susceptible to it, because you only find some of those cases when you go looking for them. And so you also don't want to punish the companies that are doing all the looking for the misalignment cases, as opposed to accidentally experiencing them, as in the case of xAI.
>> So yeah, I do think it's a problem. I think the best way to address it is just to try to be intentional about target selection, where you're thinking, when we're choosing what company to focus on, are we choosing them because [clears throat] they are the most honest about the risks here, or because they're maybe the most responsible for the risks, or, even worse, trying to deflect the responsibility and mislead people about the risks. >> Mhm. Yeah, it seems like a good approach to take here. So in some sense the rational response from the companies here would be to close ranks and to perhaps have strong legal mechanisms for no information getting out, and, you know, prevent their employees and their CEOs from doing interviews and so on. Do you think that's happening? Do you think this incentive is actually kind of materializing in the world? >> I think that might be happening for many reasons. Um, yeah, it does seem to me that the industry is getting more and more locked down and wants to be more and more careful about even internal paper trails. Like, I know there's recently a case, in an intellectual property lawsuit against OpenAI, where I believe the plaintiffs have gotten access to all the internal Slack messages at OpenAI and are going to try to use that as evidence to support their case. And so I think they're going to be more locked down externally. I think they're going to be more locked down internally, where they're thinking, you know, what conversations should we be having on Slack? And, maybe for valid security reasons, what do employees need to know and what do they not need to know, and how can we make sure they only hear the former category.
>> In some instances, for security reasons, I think that might be advisable, but in many instances, I think it poses a big risk in terms of what the public knows and what they don't know. And I think there's some very low-hanging fruit around transparency, where it would be really good if the companies would kind of draw a line: where they're going to disclose, you know, whether they're training on chain of thought, where they're going to disclose the sort of evaluations that they're using to test their models and the results of those evaluations. And something we'll talk more about, I'm sure, is also the governance structures they have in place, you know, how can a company make a credible case that there are internal structures in place that will catch something dangerous before it's too late. >> And so I think for questions like that, more transparency is extremely important and there's a very strong case for it. >> Yeah. One interesting aspect of the approach you're taking here is the fact that you might be able to effect change with, you know, a thousandth or even less of the resources of the companies. And so, how do you think of that? Because it seems like if you were to go up against the companies in a kind of head-to-head battle on resources, say trying to lobby or something, that might be a tactic where you're bound to lose, just because these companies have basically infinite resources. So yeah, what can you tell us about power differentials and kind of using the leverage you have even if you don't have as many resources? >> Yeah. Um, I think that groups like the Midas Project are frequently in an insanely leveraged position, where you can get a lot more traction than you'd expect for the size. Um, I mean, one example that I think of in the animal rights space, to remind myself that this is not kind of an even playing field.
>> The organization I used to work for, the Humane League, has been pretty successful, along with some partner organizations, in kind of shifting the entire corporate supply chain for eggs. I think it's gone from like 3% cage-free in 2015 to like 50% in 2025. >> Their annual budget I think was like $10 to $20 million. The annual marketing budget alone for Walmart is $9 billion or something [laughter] like that. So even to change one company, it looks like the odds are totally stacked against you. Not to mention to change an entire industry. >> Mhm. >> Um, I think the reason that it works is because you have this immense intangible asset in the fact that, about many of these issues, you're fundamentally right. The evidence for you being right is there, and the public is kind of already on your side. And so, you know, you even mentioned off-handedly lobbying as an example of a case where you don't just want to lose by being outspent. Even in the case of lobbying, if you're going up against an insanely well-funded industry that is lobbying against regulations, for a technology where there's a common sense case for regulation and where most of the public is on your side, >> it's not obvious to me that the industry wins. I think that the intangible asset of strong public buy-in, a strong common sense case to be made for what you're asking for, [clears throat] is in some ways a more important asset than infinite money. You know, you could have all the material resources in the world, but at a certain point... If the final input at the end of the day that informs regulation is what the public wants and who they vote for, then at a certain point the money stops working for you. So, I think that's why, you know, I don't think I could go up against a huge company like OpenAI and say, "Hey, you should change your logo to be yellow. I don't like that it's black and white.
I think it should be yellow." Because, like, you know, the public doesn't care. There's no buy-in, and they could very easily quash whatever niche interest I have. But if I'm making the case for, hey, you should do this common sense thing that the majority of the public, including the majority of your own customers, think you should do, that most of them just aren't thinking about right now because they haven't been tracking the issue, and once they are tracking the issue, they'll be pretty upset about the fact that this was even a problem in the first place, that this is something you haven't done already, >> then, you know, you're in a pretty leveraged position to make a case for what you want. >> Yeah. Let's talk about some of the projects that you've been engaged in at the Midas Project. So you have the OpenAI Files and you have an open letter to OpenAI. Could you talk about both of those? >> Yeah. So this has been our main focus for the better part of this year, >> and [clears throat] this is because OpenAI throughout the year has been undergoing this restructuring. There's already a case for them being, you know, the most important AI company for advocates to frame their messaging around, because they're synonymous with AI for many people thanks to ChatGPT, and, I think if you look at Google Trends, ChatGPT just dominates all of the other AI products, so when you're communicating to the public, OpenAI is kind of who they're thinking of. >> And this restructuring was taking place that I thought, and I still think, is kind of the biggest story in AI right now. Mhm.
>> And, you know, we asked ourselves: what information is not out there right now, or, to the extent that it's out there, is under-discussed and under-indexed, that is relevant to this restructuring, to contextualizing it, and to potentially generating a better outcome, because it lets the public know what they should be worried about and what they should be fighting for, or because it lets regulators know, you know, the full context of this organization. >> [clears throat] >> And that was the motivation for the OpenAI Files, which was a kind of web-native report that was maybe 14,000 words. It was very long, and it was for the most part an archival project. There are relatively few new bits of information in there, although I think there are a few. And for the most part, it's summarizing stories about OpenAI and quotes from OpenAI themselves and from employees who worked at OpenAI, and various concerns we had about their governance and their safety decisions and the integrity of their leadership that had surfaced over the past decade. >> Mhm. >> And [clears throat] the reason it seemed important is because there are a lot of these examples of kind of concerning individual events that happened, or concerning things that one person said, >> and taken in isolation, which is how you would normally encounter those stories, it's easy to kind of forget about it and think, like, oh well, it's a relatively small thing. They messed up there, but they fixed it and it's better now. Or, you know, okay, well, this one person has a grudge against them.
Like, who's to say whether this actually demonstrates a pattern of kind of governance failures. And so, you know, I thought there was a really strong case to collect all of it in one place and kind of create a narrative and say: take it in its totality, what does this mean in terms of, like, should we by default trust this organization >> to govern itself well, or do we have to be a little bit more critical of the choices they're making. So that was the motivation for the OpenAI Files. And the transparency letter, which, as you mentioned, came >> I think two months after the OpenAI Files came out. This was really homing in specifically on the restructuring, and it was a letter that has now had 10,000-plus signatures, including a number of former OpenAI affiliates, leaders in the field of AI, dozens of civil society organizations, and just thousands of members of the public. And it was a letter that I think was pretty simple and common sense. It wasn't making any claim about how the restructuring should go, but it was just asking for kind of more clarity from OpenAI. Um, there were seven questions we asked, and I think these were questions where the answer was very high stakes. It was a very important thing for the public to be clear on, and I had a sense, from messaging that they were using, that, whether intentionally or not, sometimes they were obfuscating the truth about that question, or, like, I had some reasons to believe that the answer wasn't as good as they were portraying in public. And so this was a request for them to just kind of go on the record and say plainly, for each of our seven questions, what the outcome would be if they got their way in the restructuring. >> Mhm. [clears throat] Do you have a sense of whether OpenAI is more chaotic internally than your average startup?
So, this is a defense I've heard some people give: that if you look at a startup, the marketing is sleek, but it's complete chaos internally, and OpenAI is just no different from that. What's in the OpenAI Files that is perhaps more damning than just what's happening at a regular startup? >> Yeah. It's an interesting question. I have never researched a regular startup as thoroughly as I've researched OpenAI. And so I could be subject to some bias here, where I do think that OpenAI is unique, and in fact, if you subjected any organization to this level of scrutiny, they'd all come out looking this way. >> I don't think that's the case. And I think Sam Altman himself has said things to the effect of, you know, we're doing something extraordinary here. And as we get closer to this moment where we develop AGI, say, the stakes are just going to get higher and higher, and the conflicts are going to increase, and on all sides, everyone who has a view on how this should go is going to kind of become more and more aggressive about pursuing their vision. And so I think that, just from the outside, before you even considered the specific mistakes that I believe OpenAI has made, if you were to ask yourself: do you expect more kind of conflict and risk-taking and prioritizing moving quickly over moving safely from, like, a B2B SaaS startup that's just making some kind of trivial piece of enterprise software, or do you expect it from the company that genuinely believes that they're going to automate all human labor? >> I think you should expect more of that from the kind of hyperscaler startups like OpenAI and Anthropic and xAI and others. In terms of what we actually found, I think, you know, I mentioned we wanted to put all these examples in conversation with each other, to review the totality of them.
And I think you see patterns emerging that indicate that, for example, OpenAI is willing to go to great lengths to silence critics when they think that it's important to do so. And, you know, treating these people as real people, I don't think that they're trying to silence critics for some, like, mochi reason, nor because they have, like, a power fantasy or anything. I think they may very well be motivated by, like, mission-related reasons, where [clears throat] they really want AI to go well. But they may also think, well, it's much more important that I do it, to make sure it goes well, than that my competitors do it. It's much more important that we do it versus our national adversaries doing it. And for that reason, things that could slow us down instrumentally, like criticism from former employees or from external critics, are something that we should really try to clamp down on. >> Mhm. Yeah. Let's dig into the subpoenas. So, tell me about what you've received from OpenAI. >> Yeah. So, I received a subpoena from OpenAI. I received two subpoenas, actually. >> Um, this was in August. I wasn't home at the time, but I got a text from my roommate saying that, you know, someone's at the door with papers, and there was a bit of back and forth to eventually get a hold of them. But when I did, uh, it was two subpoenas, one directed to the Midas Project and one directed to me personally. And, we can get into everything they asked for, there were 11 requests for production, but the context for this is plainly about the restructuring. Um, as mentioned, our organization has been speaking out about the restructuring. Dozens of other organizations have been speaking out about the restructuring, and similarly, Elon Musk has been speaking out about it, and, going further than that, he's, you know, taken them to court about it. And so the subpoena was in Elon Musk's case against OpenAI, or maybe more precisely, it was related to OpenAI's counterclaim against Elon Musk.
So after he took them to court, suing them for undergoing this restructuring, which I think you could have different views on: if you want to be charitable, you could say, you know, he co-founded the nonprofit, donated a bunch, clearly he has a stake. If you wanted to be uncharitable, you could say that he's running a competitor that's not a nonprofit and that would benefit from OpenAI being slowed down. >> Uh, [clears throat] whatever view you want to have of him, he's suing them over it. And they're countersuing, saying that he's waging a harassment campaign about this. So the subpoenas were ostensibly related to this case, and one of the main things they ask for, which I think is basically fine and acceptable, is whether he has supported us, whether he was involved in the formation of the organization or has donated to the organization. >> Um, you know, I [clears throat] would have rather they asked me in a friendly way, but I guess I could understand why, if I was a bad actor, maybe I wouldn't answer that honestly or something. And so they want me to swear to the court. And I'm happy to swear to the court, by the way, that Elon Musk has no connection to the organization, has never donated, and we would not accept a donation if he tried. So that, ostensibly, was the context for the subpoena, and I think the reasonable thing they asked for. The things that I think went beyond what was reasonable, which we can get into, are the scope of what they asked for, which went beyond just asking if Elon Musk is involved, and the context and the breadth of the recipients of the subpoena. >> So, um, addressing those kind of in order... actually, I'll address them in reverse order, because this is relevant.
So, first, for the context of who got the subpoena and when: as far as the groups that have gone public now, I think all of the groups that I know of that have been subpoenaed were signatories on that transparency letter that I mentioned. And, you know, at least our subpoena came in a few weeks after we published that. I know others came in at different times, but generally, these were all the organizations that were kind of speaking out about the restructuring. Some of those organizations, I think, plausibly could have looked supported by Musk, and it could be relevant. Some of them really didn't. Like, one of the subpoenas went to Ekō, which is, you know, this kind of massive grassroots organizing group that's been around for 20 years and has been criticizing Elon Musk very insistently for, you know, I don't know how long, but especially in the past few years due to his role in government. And I think they even, you know, take a shot at him in one of their petitions about OpenAI. And so it's [clears throat] surprising to me that they would have a genuine reason to suspect that an organization like Ekō or the San Francisco Foundation is supported by Musk. Nonetheless, they subpoenaed all these groups. And then, going beyond just asking if they were supported by Musk, they asked for, as I understand it, all documents and communications about OpenAI's governance and restructuring. This is very broad. They define what they mean by "documents" and "communications" and "relating to" in the subpoena itself, but it's really anything tangentially connected to it. If I, you know, text a journalist talking about some component, like the profit caps or something, suddenly that whole conversation can get drawn into the subpoena.
Um, given that our organization focused on it for the better part of the year, I think it really would be thousands or tens of thousands of pages if I were to actually scour through and find anything that touched on OpenAI's restructuring in any of our emails and text messages and documents. Then, even beyond that, it wanted to know every single person who'd ever donated to us and the date and amount of that donation. And it wanted to know any documents we had, as the Midas Project, on OpenAI's investors or any for-profit entity that had considered an investment in OpenAI. And maybe most egregiously, this didn't apply to the Midas Project but applied to other groups, including Encode, who went public about this: it was also asking for documents about totally unrelated legislative battles that these organizations were in, where OpenAI was to varying degrees on the other side. So SB 1047 and SB 53 are examples of California bills where OpenAI, you know, fought one of them. The other one they say they didn't fight, but I think there's a case that they did, and they've said some misleading things about the bill. And they asked Encode, for example, for all of their documents and communications about that legislative fight. >> Mhm. >> So these subpoenas really went beyond just asking about Musk and funding, the only thing relevant to their counterclaims. >> Yeah. Could there be valid reasons for these subpoenas being as broad as they are? Is there some legal complexity that perhaps I don't understand here, or why do you think they are as broad as they are? >> Yeah, I also am not the best person to give an opinion on this, because I'm not a lawyer. Um, but I have read basically every public statement I could find from lawyers who have spoken about this on social media or in the news, and also talked to a few privately. And, you know, I think I've only heard one or two try to make the case defending this.
And the case defending this is something like: if you're a bulldog litigator and you really want to win this case, you have to collect all sorts of evidence. And if it turned out that one of these groups was supported by Musk, and you could also strengthen the harassment claim through some random email about OpenAI's restructuring that happened four months ago, then you have to go get all that information. And to the extent that it's burdensome or unreasonable, then it's incumbent upon a group like the Midas Project to fight back on that and negotiate some narrower scope, or move to quash it in front of the judge, or something like that. I've only heard one or two litigators try to make that case. Most of what I've heard has been that this is a pretty unreasonable scope. That this is something that looks like perhaps an intimidation tactic, a way of, you know, sending people to your door. In my case it was, like, a private investigator that came to our door; in other cases, a sheriff's deputy. These are, you know... it's somewhat standard in the field to find people like this who serve as process servers to deliver the documents. But of course, it's a little bit scary and intimidating to get these documents saying you're commanded to produce all of this in two weeks, in perfect form, by order of the court, or whatever. And so, you know, there are some people who think it looks like intimidation. I sometimes think it looks like kind of intelligence gathering, right? If you want to know what's being said to people in these legislative battles, what's being said to people in the restructuring, for instance, what congressional offices the Midas Project has talked to, what journalists we've talked to, whether we ever slipped up and said something incorrect that OpenAI could go after us for, this would be a great opportunity to do that.
And maybe one other hypothesis is that it was an opportunity to slow us down, you know, to throw sand in the gears. If you were to actually comply with it... I was, at the time we received it, the only full-time employee at the Midas Project. So, you know, if I was doing all of it, it would have taken a month or two of work, and maybe the point was just to throw sand in the gears during these critical weeks before the restructuring was approved, which, in fact, it turned out to be: the six weeks before the final approval came through. >> So, yeah, say more about that. How burdensome is it for the Midas Project, or an organization like the Midas Project, to receive subpoenas like this? Because you mentioned you need to produce all kinds of information in perfect form and so on. Is that possible, do you think? How burdensome is this? >> It would have been really challenging. I think the legal requirement is something... and, you know, I'm speaking about legal questions and I'm not a lawyer, so I should caveat that you should take this with a grain of salt. Um, I think the requirement is that you have to give, like, your good faith effort or something, >> and I do think that my good faith effort at producing all documents about OpenAI's governance and restructuring, after, you know, spending months producing that 14,000-word report and gathering thousands of signatures for our letter and doing stuff like that... I think it could have been a huge task to produce it all. Of course, you know, if you're wondering how burdensome it is in the real world: we didn't end up having to do any of this. We didn't end up having to produce a single document in response to the subpoena.
And I don't know the extent to which smaller nonprofits in our class actually do get scared into doing all of this because they read the documents saying they're commanded to, versus getting good legal counsel that advises them they can fight back and avoid it. So in practice, maybe it's not as burdensome as it looks on paper. But given the breadth of everything they asked for, it looked very much like a fishing expedition to try to gather all the intel they could. >> And so how did you respond to these subpoenas, and are there lessons here? My assumption is that you should just immediately get the best lawyer you can afford, and that's the first step. But tell me about this. >> I don't know if my case is particularly representative. I didn't get the best lawyer we could afford. I have been working with the same lawyer we've had since we started the organization, who I knew from the animal rights movement, and our case was a little bit unique. She spotted what she thought was an error in OpenAI's subpoena. I think she talked to OpenAI and said something like: you should have gotten this issued by an Oklahoma court, which is where I live; therefore we don't think this is enforceable. Obviously, if you wanted to, you could go and get an enforceable subpoena tomorrow and serve us the exact same subpoena. And I think she said something like, "If you do this, we will tell you no to all the questions about Elon Musk, because those are the reasonable questions and there's just nothing there. We'll be happy to tell you that." And then for everything else, we will move to quash, which means taking it before the judge, and the judge would have to decide whether to compel us to produce it or, far more likely, tell off OpenAI for the immense breadth of the subpoena.
And in fact, the judge in this case had already told off OpenAI once before for their abuse of the discovery process. That was not related to the nonprofits they subpoenaed, but rather, I believe, to their subpoenas about Meta. But it wouldn't have looked good for them to go before the judge again and have this happen again. >> Mhm. >> So I think we told the OpenAI lawyer that, and then I don't think we ever heard from them again. [laughter] The way it resolved was a couple of follow-up emails from us and no response. We understand ourselves to be free of the obligation now. >> Yeah. >> What lessons do you take from this? We talked about how a project like the Midas Project is in a leveraged position and can challenge much larger organizations. Is this where that reverses? If you have a team of hundreds of lawyers, can you basically stall small organizations that are working to make you more transparent? >> I don't think so. I think it is a matter of fact that they didn't succeed in really slowing us down. In fact, I think they made a mistake here, in that this was bad comms. It was a bit of a mask-off moment. When it was made public, there was a lot of interest: the San Francisco Standard wrote about it, NBC News wrote about it, The Verge wrote about it, and it looked pretty bad, and they didn't have a good answer to it.
There was this funny moment on Twitter where Nathan Calvin at Encode first had a very viral tweet detailing his experiences with the subpoena, and Jason Kwon, OpenAI's chief strategy officer, posted a long thread responding to it, saying basically that the situation is not as simple as Nathan was making it out to be: even though we sent this subpoena, what Nathan didn't mention is that Encode, his organization, actually submitted an amicus brief in support of Elon Musk's case, and for that reason subpoenas are to be expected; once you've inserted yourself into a case like this, you can expect to receive these documents. One reason this doesn't work is that it doesn't explain why the subpoena mentioned things like SB 53, which are just totally unrelated to the case. Another reason it doesn't work is that the subpoena went to other groups, including my own, that never touched the case with Elon Musk. And a third reason this argument doesn't work is that, from the experts I heard from, it's actually not normal to start subpoenaing nonprofit organizations that submit an amicus brief in support of a narrow claim like "is the restructuring bad for the public." That is a pretty aggressive thing to do. And in fact, I think there was already a financial declaration in the amicus itself saying they received no financial support for the production of the document. So I don't think it worked for OpenAI. I don't think they got what they wanted out of it, and I think it backfired for them in real ways.
Among the employees, who I don't think fell for Jason Kwon's poor arguments about this, there's a Twitter thread from Joshua Achiam, the head of mission alignment at OpenAI, who to his immense credit was honest about it. I think at great risk to his career, he said that this doesn't look good, and that OpenAI should be striving to avoid even the appearance of the misuse of power, not to mention the actual misuse of power. >> And whether or not you think this was a misuse of power, it certainly looks like one. We can tell that. So, yeah, I don't think it worked for them. I think it remains the case that it would be hard for them to use a hundred lawyers to slow down scrappy organizations working in the public interest. Mhm. >> One thing I'll mention is that people could also ask: well, what if they just filed a baseless lawsuit to drown you in legal fees or something like that? >> Which is something I do think about. In fact, after this whole subpoena incident, we were trying to get an insurance policy for media liability, to make sure that if I'm going on a podcast like this and I slip up and say something wrong about OpenAI, we are covered in case they take us to court for defamation or whatever. And every insurer our broker reached out to said no to a policy for us, I think implicitly at any price, with multiple of them citing the article in the San Francisco Standard about our subpoena. >> So one way the subpoena could have chilled speech is that it maybe made us uninsurable. And most nonprofit managers, or managers at journalistic entities, in the absence of a policy like that, would tell you to be way more conservative about what you say in public.
So someone could ask: okay, you don't have an insurance policy; OpenAI could just take you to court over baseless things, even if you're totally in the right. Could that drown you in legal fees, and thus could they win? I don't think so. And part of that is somewhat unique to the United States, as far as I know; actually, I only know the US context. But I think there's pretty strong First Amendment protection in the United States, particularly when the speech is in the public interest. And there are statutes called anti-SLAPP laws, where SLAPP stands for strategic lawsuits against public participation. So in some ways, to the extent that OpenAI wanted to be even more litigious against organizations speaking out against it in the future, I think it would backfire for them again. To the extent that those organizations are making reasonable, careful claims, OpenAI could find itself in another situation where its attempt to be legally aggressive ends up backfiring, another mask-off moment, and they have to do damage control on the comms side. >> Yeah. What's the positive case for more transparency? How could this benefit OpenAI, and benefit the world, in a kind of positive upward spiral? How do you see the future of transparency in the AI industry?
The big question we as a society have is how we actually create the rules and the structures and the controls that prevent bad outcomes here, and that enable the great futures AI could bring us, without all that potential being curtailed by the various things that can go wrong with the technology. The people building it themselves admit it's a huge risk, and they'll sometimes give probabilities that I think we wouldn't accept with any other technology. You wouldn't get on an airplane with a 10% chance of crashing. You wouldn't cross a bridge with that chance. And so when the people building the technology say, "Yeah, there's a 10% chance this goes really wrong, in a catastrophic way like no other technology has gone wrong," it's a huge challenge for us as a society to work out how we implement the rules needed to address this. And unfortunately, I wish I had better news here, and I wonder if you share this perception: I've not heard a single answer to that question that satisfies me. No one has the prescriptive solution where they say, "Here are the exact rules and controls we'll put in place to make sure we can avert these risks." There are arguments I hear, and find compelling, that we are just so far away from understanding the technology that it's as if we're alchemists trying to work out how to convert lead into gold. And so I think transparency is especially important in a world like the one we find ourselves in, where you don't actually know what the solution is.
At the very least, you want the monitoring capability and the visibility to know when things are getting serious, when the models are getting really powerful, and when there are warning shots that maybe make us want to put our foot on the brakes, or sit down in some sort of international body and talk through a treaty to determine a safe future for this. And so, in the absence of really strong technical solutions to the problems AI poses, or governance solutions of the form "here are the exact controls and committees and evaluations and mitigations you need in place," transparency is simply: we should at least know what's happening. We should at least not be walking toward the cliff blindfolded. There should be an obligation for companies to discuss how capable their internal models are, and to have some red lines in place, so that when those models hit critical thresholds, we as a society know. And I think transparency could generate more political will to invest more money, more time, and more good-faith effort in finding solutions that will work. >> Yeah. On the technical side of AI safety, at a minimum it's difficult to solve the alignment problem, and it's difficult to set up the various technical half-solutions that in combination could constrain AI in the ways we want. I've also talked to many guests about interesting governance schemes, and it seems these are quite advanced and would take a long time to actually implement in the world. So if we're talking about what we're working with today, we're working mainly with the US legal system and the corporation as a structure, whether that's a public benefit corporation or a normal corporation.
But we're working within the constraints of a system that takes a long time to change. And in that environment, I think the argument for transparency is very strong: this is what we need at a minimum. We need to understand what's going on in order to respond to it. Are you hopeful that we will implement new governance techniques, that we will innovate in the governance space as we have innovated in the technical space? Here I'm thinking that on the capabilities front we seem to be way ahead of where we are on governance. >> The honest answer is that I am hopeful against my best judgment. [laughter] I think it really needs to happen. But I've spent some time thinking about analogous past technologies that were new and required new forms of governance, rulemaking, or social response, and I think we primarily respond to these things reactively rather than proactively. Mhm. >> And so I expect we are just going to continue treating this in a business-as-usual way until some moment where something major goes wrong, or there is a very compelling demonstration of the capability of these models, which will lead to some sort of regulatory response that will either be judicious, responsible, and well measured, or it won't be. And to the extent it won't be, I could see it going in multiple directions. It might be the same mistake we made, in my opinion, with nuclear power, where we just banned it too early.
Or, on the flip side, it might be that we don't do enough even in a reactive posture, because we're worried about an international race, for example. And then it's the same mistake we made with nuclear weapons, where we exposed ourselves to an immense and unacceptable amount of risk throughout the 20th century because we couldn't get everything screwed on straight in terms of orienting with the international community and finding some solution to make nonproliferation, or global cooperation, the forefront of the nuclear weapons regime. So, yeah, I expect the same will be true for AI. And I think there probably is a good solution out there in terms of governance strategies, whether that's government rulemaking, or corporate structures, or whether this is happening in corporations at all or in some sort of CERN for AI. >> But I don't think the odds are good. I think we're going to have to really fight to make that a reality. >> Yeah. One pretty advanced system we have for dealing with risk is the insurance industry. We have professionals who are quite capable of pricing risk, and we know how to do this in many industries. Is there hope that we might apply what we know about how insurance works to the AI industry?
A genuine question I have, and I don't know if you have an answer to this: do we have evidence that the insurance industry is good at pricing risk when there isn't a history of the frequency or severity of that risk? The insurance industry is probably good at pricing hurricane coverage, because there have simply been enough hurricanes that they know. It's not obvious to me that they have the information they need to price risks from AI, especially if they are convinced, as I think many reasonable people are, that there's immense downside and a greater-than-1% chance of that immense downside being realized. Maybe you end up in a situation where the proper pricing just looks ridiculous, because there's this outlier downside dominating the speculative equation. So, yeah, I don't know. It's not clear to me that the success of insurance in pricing other risks actually applies to something as one-of-one as AI. >> Yeah, I mean, intuitively they must know about black swan events and tail risks; that's a standard part of the math behind insurance. But it's true that if you were to price the risk of AI in quantitative terms, it might look like something that can't work in the real world, because then the industry wouldn't work. And if that's the case, it tells you something about the AI industry as it's currently functioning. I guess the question is whether we have a better alternative to the insurance industry for rigorously thinking about the risks of AI. >> Yeah, I'm not sure. >> I'm not sure either. All right, so we have different options for trying to deal with this problem. We can try to deal with it technically.
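Tyler's point about an outlier downside dominating the pricing can be made concrete with a toy expected-loss calculation. The numbers below are purely illustrative assumptions, not anyone's actual estimates: an actuarially fair premium is at least the expected annual loss, so even a 1% chance of an enormous loss swamps the routine component.

```python
# Illustrative sketch only: how a small-probability, huge-severity tail
# dominates an actuarially fair premium. All figures are made up.

def fair_premium(routine_loss: float, p_routine: float,
                 tail_loss: float, p_tail: float) -> float:
    """Actuarially fair annual premium = expected annual loss."""
    return p_routine * routine_loss + p_tail * tail_loss

# Routine claims only: 5% chance per year of a $1M loss -> ~$50k premium.
routine_only = fair_premium(1e6, 0.05, 0.0, 0.0)

# Add a speculative tail: 1% chance per year of a $10B catastrophe.
# The tail term alone contributes $100M, dwarfing the routine component.
with_tail = fair_premium(1e6, 0.05, 1e10, 0.01)

print(f"routine only: ${routine_only:,.0f}")
print(f"with tail:    ${with_tail:,.0f}")
```

Under these assumed numbers the quoted premium would be roughly two thousand times the routine one, which is the "pricing looks ridiculous" outcome described above.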
We could try to deal with it in terms of governance. We can think about transparency, and about pushing the companies in the right direction from the outside. What's the government's role in all of this? Say, with transparency, we find best practices by pushing the companies from the outside. Is there a point at which transparency should be incorporated into the law, so that you would have legal processes or legal frameworks for how transparent an AI company should be? >> Yeah. I think this is absolutely necessary. The Midas Project has operated with more of a focus on just trying to get companies to do more under a kind of self-regulation regime. That's for a few reasons, but maybe the most important is that that's just mostly where we live today, although SB 53 and the EU AI Act have been the first really significant steps toward codified transparency obligations. Another reason we focused on self-regulation is that I think there are just fewer people really focused on it, and in particular on trying to make effective self-regulation happen, than there are people trying to make government regulation happen. >> But it doesn't surprise me that there's that imbalance, because government regulation is just stronger, insofar as it covers everybody, immediately, into perpetuity. I am kept awake at night by all the flaws of the self-regulation regime.
Not the least of which is that it can just be thrown away at any point. A company that made a transparency commitment can, when they see the results of their most recent model, or say they have a new model they want to deploy internally as an automated AI researcher, weigh the costs and benefits and decide: actually, even though we promised to be transparent about the existence of this model and its capabilities, that was just a promise, and now it's way too costly for us to fulfill it. >> Yeah, I do think we have to worry about a sort of safety tax. If you're in an environment where there's a lot of funding and things are going well, and perhaps your models aren't that dangerous yet, then you can spend capital, social and monetary, on projects that are not directly related to making your models better. If the race tightens and you are behind, perhaps you throw all that to the wayside and just focus on racing ahead. And so the worry is that the self-regulation regime simply gets pushed aside as more important things, so to speak, arise. >> Yeah, I think that's exactly right, and it makes a strong case for government regulation. But it also demonstrates that government regulation needs real implementation and enforcement strategies attached to it. It can't just be nice in theory, like "oh, you'll let us know at your discretion about these things through your contact at the agency," because then the exact same problem applies.
And so I think it also teaches us that the regulatory solution, if and when it comes, needs to actually have some teeth, and to create ways to monitor the behavior and activities of these companies in non-falsifiable ways, or in ways the companies can't just get around by skirting the regulation, which I think many of them are willing to do when the benefits outweigh the costs in their assessment. >> How would you rate the transparency of the current US AI industry, perhaps compared to other industries? Where are we on a scale from one to 100, would you say? >> Maybe it depends on whether 100 is where other industries are at their best, or where the AI industry needs to be for things to go well, because I don't think there is a single industry that is at the place the AI industry eventually needs to be. >> Yeah. So you're saying the standards for the AI industry should be much higher, because the stakes are higher than in perhaps any other industry. >> Exactly. So if you're comparing to other industries, I think you could say the AI industry does pretty well, because there is this strong culture of discussing this stuff openly, of publishing model cards, even of giving away competitive advantages. It's always surprising to me that the transformer architecture was just freely given away by Google. A big part of this, maybe owing to the industry's academic origins, is this huge culture of putting your papers up on arXiv and releasing pretty detailed reports about your systems. So, compared to other industries, I think it's pretty good.
>> I think it's getting locked down, as we've mentioned, and I think it will continue to get more locked down over time, and it's still pretty far from where we want it to be. >> Yeah. How would we measure this movement? One way is just talking to you, who know a lot about transparency in the industry, and then perhaps interviewing you again in a couple of years and seeing where we are. Is there a way to make this quantitative? Is there a way to measure transparency, or to produce some kind of report on how transparency is moving up and down in the industry? I'm not necessarily thinking it would have to be something super naive, like "the industry is now at 23 on transparency and last year it was at 20" or whatever. I'm just asking whether there is a way to be more rigorous than informed impressions. >> To the extent that the goal is to be more rigorous about ensuring transparency, I think solutions like auditing are the way forward. I don't know if it would result in quantitative measures of transparency, but I do think that by reading auditors' reports over time, if they're truly third-party and independent, and their own assessments of their level of access to the companies, we would get a pretty good sense of how meaningful the transparency being offered is. And I know this is a priority. I think this is what it would look like for regulation to have teeth in terms of making sure companies can't avoid it. So I know it's a priority for people thinking about regulation, and also for people trying to help the industry come up with better self-governance tools.
Practically, though, I think that means waiting until there is a good ecosystem of auditors, which is growing right now but is, I think, underdeveloped, and then looking to them; hopefully they can speak freely about the extent to which they have the access they need, or whether they're struggling to get it. There are some early concerns I have. I know they're not exactly an auditor, but the group METR has done evaluations for companies in the past, and I think I've seen it noted in some system cards that they got something like seven days to do an evaluation, and sometimes they even say, yeah, we feel pretty uncertain because we didn't get much time. [laughter] >> Yeah. >> And that's the sort of thing I'd be looking for from an auditor to conclude that there is not enough transparency at the moment. Mhm. >> How helpful do you think AI is here? We're talking about processing a lot of information, perhaps very quickly. Of course, people at organizations like yours could just use AI as a helpful tool. But is there a way for us to use AI to create more transparency? Perhaps some automated process to look into documents that AI companies have to produce publicly, to search through them and see if there's anything you can surface that the public is not aware of just because it's buried in some document somewhere. >> That's pretty interesting. Yeah, I hadn't thought about it much.
When I imagine the benefits and costs of that right now, it seems to me that one reason AI companies might be excited about it is this: the concern with transparency is that you give away competitive secrets, or that you harm national security through increased transparency about your models, how you're training them, and their current capabilities. If you could have AI transparency auditors, I don't think companies would want them to find random hidden documents that are big red flags. I think what they would want is for that to be identified and disclosed by a party they can ultimately trust. In the same way that in model evaluations the judges are frequently AIs, and the reason to have an AI judge is that it carries some assumption of neutrality: if you don't necessarily trust the incentives or goals of your auditor, and you nonetheless have to give an incredible level of access to some party, maybe an AI intermediary would appeal to AI companies. My gut reaction, though, is that I have some concerns related to loss-of-control issues: do you really want to hand over the transparency process to the AI itself? [laughter] >> Yeah. And of course, it also seems almost comical; you can imagine an instance of GPT-5 investigating OpenAI or something. So you would perhaps need an open-source model, or something more credibly neutral, to engage in this process. It just seems that these companies will get larger and larger, they will produce more and more paperwork, and much of that will be public because it has to be for legal reasons. And so perhaps there's something in there that is already public but needs more attention.
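The kind of automated triage of public filings being discussed could be sketched very simply. This is a hedged illustration, not any tool the Midas Project uses; the keywords and the helper `flag_passages` are hypothetical, chosen only to show how a first-pass keyword scan might surface buried passages for human (or model) review.

```python
# Hypothetical sketch: flag passages in a public filing by keyword,
# returning each hit with a little surrounding context for review.
# Keywords below are illustrative assumptions, not a real watchlist.

KEYWORDS = ("internal deployment", "capability threshold", "indemnif")

def flag_passages(document: str, keywords=KEYWORDS, context=40):
    """Return (keyword, snippet) pairs for every keyword occurrence."""
    hits = []
    lower = document.lower()
    for kw in keywords:
        start = lower.find(kw)
        while start != -1:
            lo = max(0, start - context)
            hi = min(len(document), start + len(kw) + context)
            hits.append((kw, document[lo:hi]))
            start = lower.find(kw, start + 1)
    return hits

doc = "The board approved internal deployment of the new model last quarter."
for kw, snippet in flag_passages(doc):
    print(f"[{kw}] ...{snippet}...")
```

A real pipeline would layer a language model on top of a crude filter like this, but the basic shape, scan everything public and escalate the few interesting passages, is the same.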
And yeah, perhaps this is where AI could be useful, but I'm unsure about that. >> Yeah, it's an interesting idea. >> I think we should end by talking about your hope for where we end up if the Midas Project is successful. Where are we in five or ten years? What's the situation with regard to transparency in the industry, and how did the Midas Project help? >> Maybe it's hard for me to answer on a five-to-ten-year time horizon, because of the weird beliefs I have about AI progress relative to the public, but normal beliefs relative to the industry. So it's easier to answer on a two-year horizon. I have a sense right now that a great deal of the discourse around AI isn't fundamentally bought in to the power of the technology and to the inadequacy of our current institutions to control and monitor it. That's true of discourse among people who are worried about it and write it off as a stochastic parrot or something, as well as among people who are excited about it, like the effective accelerationists on Twitter, who are kind of role-playing a pro-technology attitude without taking seriously how important a dual-use technology like this really would be. And then even among the people who fundamentally buy that it's going to be powerful, I sense some undeserved trust in the institutions developing it. It's very easy to hear US Congresspeople saying that regulation would be a terrible idea right now because we just can't slow down industry relative to our national adversaries. Implicit in this is the assumption that the free-market model, where the companies set their own standards for the technology, operate in their own dark corners of the industry, and put up their own walled gardens where they keep all of their activities to themselves, is going to work for us, that it will actually lead us to developing this economically valuable, prosperous technology before other countries do. So the fundamental goal I have for the Midas Project is to contribute to the public discourse in a way that, first, convinces the people who don't believe this technology is a big deal that it really is a big deal: there's mounting evidence for this in the research being done on these models, and when the companies themselves tell you it's a big deal, they're not doing it to increase their market share; or to the extent that's a benefit they get out of it, it's an unfortunate coincidence, because they're just right, their reasoning is solid, and they'll walk you through it and you can walk through it yourself. And second, even among the people who believe it's going to be a big deal, to convince them that the institutions are not prepared for this. Some of them will admit that to you, some will not, but look at their track record and you'll be able to see they're not prepared. So hopefully, in 2027, those two beliefs are table stakes for anyone having a serious conversation about what we do about AI: you have to know it's a big deal, and you have to know that we're not prepared. And hopefully the Midas Project's investigative research and our public communications will help tell this story, to the extent it's true. And to the extent we're wrong about it, we'll update. >> Great. That's a fantastic answer, Tyler. Thanks for chatting with me. >> Yeah, thank you guys.

Related conversations

AXRP · 7 Aug 2025 · Tom Davidson on AI-enabled Coups
AXRP · 1 Dec 2024 · Evan Hubinger on Model Organisms of Misalignment
AXRP · 11 Apr 2024 · AI Control with Buck Shlegeris and Ryan Greenblatt
Future of Life Institute Podcast · 7 Jan 2026 · How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)