
The TED AI Show · Civilisational risk and strategy · Featured pick

What really went down at OpenAI and the future of regulation w/ Helen Toner

Why this matters

Governance capacity is now part of the technical safety stack; this episode helps translate risk into policy with implementation value.

Summary

This conversation examines governance through "What really went down at OpenAI and the future of regulation w/ Helen Toner," surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Perspective map

Mixed · Governance · High confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
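The amber → cyan → white tinting described above can be sketched as a simple linear interpolation. This is a hypothetical illustration, not the site's actual implementation: the score range (-100 to +100), the RGB values chosen for amber and cyan, and the midpoint placement are all assumptions.

```python
# Hypothetical sketch of the slice tinting described above: a score on a
# risk <-> opportunity spectrum (assumed to run -100..+100) is mapped onto
# an amber -> cyan -> white strip by piecewise linear interpolation.
AMBER = (255, 191, 0)   # assumed RGB for the risk-forward end
CYAN = (0, 255, 255)    # assumed RGB for the midpoint
WHITE = (255, 255, 255)  # opportunity-forward end

def lerp(a, b, t):
    """Linearly interpolate between two RGB triples."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

def tint(score, lo=-100, hi=100):
    """Return the bar colour for a slice score, clamped to [lo, hi]."""
    t = (max(lo, min(hi, score)) - lo) / (hi - lo)  # normalise to 0..1
    if t < 0.5:
        return lerp(AMBER, CYAN, t * 2)      # risk-forward half
    return lerp(CYAN, WHITE, (t - 0.5) * 2)  # opportunity-forward half
```

Under these assumptions, a strongly risk-forward slice renders amber, a mixed slice near the midpoint renders cyan, and an opportunity-forward slice fades toward white.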


Across 40 full-transcript segments: median -3 · mean -5 · spread -258 (p10–p90 -130) · 5% risk-forward, 95% mixed, 0% opportunity-forward slices.
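Headline statistics like these can be derived from per-slice scores in a few lines. The sketch below is hypothetical: the band thresholds and the example input are illustrative assumptions, not the episode's real slice data or the site's scoring code.

```python
# Hypothetical sketch of how the per-slice summary above could be computed.
# Band cutoffs (risk_cut, opp_cut) are assumptions for illustration.
from statistics import mean, median

def summarise(scores, risk_cut=-50, opp_cut=50):
    """Summarise slice scores: median, mean, p10/p90, and band shares."""
    s = sorted(scores)
    n = len(s)
    p10, p90 = s[int(0.1 * (n - 1))], s[int(0.9 * (n - 1))]
    risk = sum(x <= risk_cut for x in s) / n
    opp = sum(x >= opp_cut for x in s) / n
    return {
        "median": median(s),
        "mean": mean(s),
        "p10": p10,
        "p90": p90,
        "risk-forward": risk,
        "opportunity-forward": opp,
        "mixed": 1 - risk - opp,  # everything between the two cutoffs
    }
```

For example, `summarise(list(range(-20, 20)))` yields a median of -0.5 with 100% of slices in the mixed band, mirroring the mostly-mixed profile reported for this episode.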

Slice bands
40 slices · p10–p90 -130

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: high.

  • Emphasizes governance
  • Emphasizes policy
  • Full transcript scored in 40 sequential slices (median slice -3).

Editor note

Anchor episode for the AI Safety Map: high signal, durable framing, and immediate relevance to leadership decisions.

ai-safety · ted-ai-show · governance · policy · intro · public-understanding

Play on sAIfe Hands

Episode transcript

YouTube captions (auto or uploaded) · video K6BvU4I5ANc · stored Apr 2, 2026 · 1,161 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/what-really-went-down-at-openai-and-the-future-of-regulation-w-helen-toner.json when you have a listen-based summary.

hey Bilawal here this episode is a bit different today I'm interviewing Helen Toner a researcher who works on AI regulation she's also a former board member at OpenAI in my interview with Helen she reveals for the first time what really went down at OpenAI late last year when the CEO Sam Altman was fired and she makes some pretty serious criticisms of him we've reached out to Sam for comment and if he responds we'll include that update at the end of the episode but first let's get to the show I'm Bilawal Sidhu and this is the TED AI Show where we figure out how to live and thrive in a world where AI is changing everything the OpenAI saga is still unfolding so let's get up to speed in case you missed it on a Friday in November 2023 the board of directors at OpenAI fired Sam Altman this ouster remained a top news item over that weekend with the board saying that he hadn't been quote consistently candid in his communications unquote the Monday after Microsoft announced that they had hired Sam to head up their AI department many OpenAI employees rallied behind Sam and threatened to join him meanwhile OpenAI announced an interim CEO and then a day later plot twist Sam was rehired at OpenAI several of the board members were removed or resigned and were replaced since then there's been a steady fallout on May 15th 2024 just last week as of recording this episode OpenAI's chief scientist Ilya Sutskever formally resigned not only was Ilya a member of the board that fired Sam he was also part of the superalignment team which focuses on mitigating the long-term risks of AI with the departure of another executive Jan Leike many of the original safety-conscious folks in leadership positions have either departed OpenAI or moved on to other teams so what's going on here well OpenAI started as a nonprofit in 2015 self-described as an artificial intelligence research company they had one mission to create AI for the good of humanity they wanted to approach AI responsibly to study
the risks up close and to figure out how to minimize them this was going to be the company that showed us AI done right fast forward to November 17 2023 the day Sam was fired OpenAI looked a bit different they had released DALL-E and ChatGPT had taken the world by storm with hefty investments from Microsoft it now seemed that OpenAI was in something of a tech arms race with Google the release of ChatGPT prompted Google to scramble and release their own chatbot Bard over time OpenAI became closed AI starting in 2020 with the release of GPT-3 OpenAI stopped sharing their code and I'm not saying that was a mistake there are good reasons for keeping your code private but OpenAI somehow changed drifting away from a mission-minded nonprofit with altruistic goals to a run-of-the-mill tech company shipping new products at an astronomical pace this trajectory shows you just how powerful economic incentives can be there's a lot of money to be made in AI right now but it's also crucial that profit isn't the only factor driving decision-making artificial general intelligence or AGI has the potential to be very very disruptive and that's where Helen Toner comes in less than 2 weeks after OpenAI fired and rehired Sam Altman Helen Toner resigned from the board she was one of the board members who had voted to remove him and at the time she couldn't say much there was an internal investigation still ongoing and she was advised to keep mum and oh man she got so much flak for all of this looking at the news coverage and the tweets I got the impression she was this techno-pessimist who was standing in the way of progress or a kind of maniacal power seeker using safety policy as her cudgel but then I met Helen at this year's TED conference and I got to hear her side of the story and it made me think a lot about the difference between governance and regulation to me the OpenAI saga is all about AI board governance and incentives being misaligned among some really smart people it
also shows us why trusting tech companies to govern themselves may not always go beautifully which is why we need external rules and regulations it's a balance Helen's been thinking and writing about AI policy for about seven years she's the director of strategy at CSET the Center for Security and Emerging Technology at Georgetown where she works with policy makers in DC about all sorts of AI issues welcome to the show hey good to be here so Helen a few weeks back at TED in Vancouver I got the short version of what happened at OpenAI last year I'm wondering can you give us the long version as a quick refresher on sort of the context here the OpenAI board was not a normal board it's not a normal company the board is a nonprofit board that was set up explicitly for the purpose of making sure that the company's you know public good mission was primary was coming first over profits investor interests and other things but for years Sam had made it really difficult for the board to actually do that job by you know withholding information misrepresenting things that were happening at the company in some cases outright lying to the board you know at this point everyone always says like what give me some examples and I can't share all the examples but to give a sense of the kind of thing that I'm talking about it's things like you know when ChatGPT came out November 2022 the board was not informed in advance about that we learned about ChatGPT on Twitter Sam didn't inform the board that he owned the OpenAI Startup Fund even though he you know constantly was claiming to be an independent board member with no financial interest in the company on multiple occasions he gave us inaccurate information about the small number of formal safety processes that the company did have in place meaning that it was you know basically impossible for the board to know how well those safety processes were working or what might need to change and then you know a last example
that I can share because it's been very widely reported relates to this paper that I wrote which has been you know I think way overplayed in the press for listeners who didn't follow this in the press Helen had co-written a research paper last fall intended for policy makers I'm not going to get into the details but what you need to know is that Sam Altman wasn't happy about it it seemed like Helen's paper was critical of OpenAI and more positive about one of their competitors Anthropic it was also published right when the Federal Trade Commission was investigating OpenAI about the data used to build its generative AI products essentially OpenAI was getting a lot of heat and scrutiny all at once the way that played into what happened in November is pretty simple it had nothing to do with the substance of this paper the problem was that after the paper came out Sam started lying to other board members in order to try and push me off the board so it was another example that just like really damaged our ability to trust him and actually only happened in late October last year when we were already talking pretty seriously about whether we needed to fire him and so you know there's kind of more individual examples and for any individual case Sam could always come up with some kind of like innocuous-sounding explanation of why it wasn't a big deal or misinterpreted or whatever but the you know the end effect was that after years of this kind of thing all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us and that's a completely unworkable place to be in as a board especially a board that is supposed to be providing independent oversight over the company not just like you know helping the CEO to raise more money um you know not trusting the word of the CEO who is your main conduit to the company your main source of information about the company it's just like totally impossible
so um that was kind of the background the state of affairs coming into last fall and we had been you know working at the board level as best we could to set up better structures processes all that kind of thing to try and you know improve these issues that we'd been having at the board level but then mostly in October of last year we had this series of conversations with um these executives where the two of them suddenly started telling us about their own experiences with Sam which they hadn't felt comfortable sharing before but telling us how they couldn't trust him about the toxic atmosphere he was creating they used the phrase psychological abuse um telling us they didn't think he was the right person to lead the company to AGI um telling us they had no belief that he you know could or would change no point in giving him feedback no point in trying to work through these issues I mean you know they've since tried to kind of minimize what they told us but these were not like casual conversations they were really serious to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about of him lying and being manipulative in different situations so you know this was a huge deal this was a lot um and we talked it all over very intensively over the course of several weeks and ultimately just came to the conclusion that the best thing for OpenAI's mission and for OpenAI as an organization would be to bring on a different CEO and you know once we reached that conclusion it was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him he you know would pull out all the stops do everything in his power to undermine the board to prevent us from you know even getting to the point of being able to fire him so you know we were very careful very deliberate about um who we told which was essentially almost no one in
advance other than you know obviously our legal team and so that's kind of what took us to November 17th thank you for sharing that now Sam was eventually reinstated as CEO with most of the staff supporting his return what exactly happened there why was there so much pressure to bring him back yeah this is obviously the elephant in the room and unfortunately I think there's been a lot of misreporting on this I think there were three big things going on that help make sense of kind of what happened here the first is that really pretty early on the way the situation was being portrayed to people inside the company was you have two options either Sam comes back immediately with no accountability you know totally new board of his choosing or the company will be destroyed and you know those weren't actually the only two options and the outcome that we eventually landed on was neither of those two options but I get why you know not wanting the company to be destroyed got a lot of people to fall in line whether because they were in some cases about to make a lot of money from this upcoming tender offer or just because they loved their team they didn't want to lose their job they cared about the work they were doing and of course a lot of people didn't want the company to fall apart you know us included the second thing I think it's really important to know that has really gone underreported is how scared people are to go against Sam um they had experienced him retaliating against people retaliating against them for past instances of being critical um they were really afraid of you know what might happen to them so when some employees started to say you know wait I don't want the company to fall apart like let's bring back Sam it was very hard for those people who had had terrible experiences to actually say that for fear that you know if Sam did stay in power as he ultimately did you know that would make their lives miserable and I guess
the last thing I would say about this is that this actually isn't a new problem for Sam and if you look at some of the reporting that has come out since November it's come out that he was actually fired from his previous job at Y Combinator which was hushed up at the time and then at you know his job before that which was his only other job in Silicon Valley his startup Loopt um apparently the management team went to the board there twice and asked the board to fire him for what they called you know deceptive and chaotic behavior if you actually look at his track record he doesn't you know exactly have a glowing trail of references this wasn't a problem specific to um the personalities on the board as much as he would love to kind of portray it that way so I had to ask you about that but this actually does tie into what we're going to talk about today OpenAI is an example of a company that started off trying to do good uh but now it's moved on to a for-profit model and really racing to the front of this AI game along with all of these like ethical issues that are raised in the wake of this progress and you could argue that the OpenAI saga shows that trying to do good and regulating yourself isn't enough so let's talk about why we need regulations great let's do it so from my perspective AI went from the sci-fi thing that seemed far away to something that's pretty much everywhere and regulators are suddenly trying to catch up but I think for some people it might not be obvious why exactly we need regulations at all like for the average person it might seem like oh we just have these cool new tools like DALL-E and ChatGPT that do these amazing things what exactly are we worried about in concrete terms there's very basic stuff for very basic forms of the technology like if people are using it to decide who gets a loan to decide who gets parole um you know to decide who gets to buy a house like you need that technology to work well if that technology is going to
be discriminatory which AI often is it turns out um you need to make sure that people have recourse they can go back and say hey why was this decision made if we're talking AI being used in the military that's a whole other kettle of fish um and I don't know if we would say like regulation for that but you certainly need to have guidance rules processes in place and then kind of looking forward and thinking about more advanced AI systems I think you know there's a pretty wide range of potential harms that we could well see if AI keeps getting increasingly sophisticated you know letting every little script kiddie in their parents' basement have the hacking capabilities of you know a crack NSA cell like that's a problem I think something that really makes it hard for regulators to think about is that it is so many different things and plenty of the things don't need regulation like I don't know that how Spotify decides how to make your playlist the AI that they use for that like I'm happy for Spotify to just pick whatever songs they want for me and if they get it wrong you know who cares um but for many many other use cases you want to have at least some kind of basic common-sense guardrails around it I want to talk about a few specific examples that we might want to worry about not in some battle space overseas but at home in our day-to-day lives you know let's talk about surveillance AI has gotten really good at perception essentially understanding the contents of images video and audio and we've got a growing number of surveillance cameras in public and private spaces and now companies are infusing AI into this fleet essentially breathing intelligence into these otherwise dumb sensors that are almost everywhere Madison Square Garden uh in New York City as an example they've been using facial recognition technology to bar lawyers involved in lawsuits against their parent company MSG Entertainment from attending events at their venue uh
this controversial practice obviously raised concerns about privacy due process and potential for abuse of this technology can we talk about why this is problematic yeah I mean I think this is a pretty common thing that comes up in the history of technology is you have some you know some existing thing in society and then technology makes it much faster much cheaper much more widely available like surveillance where it goes from like oh it used to be the case that your neighbor could see you doing something bad and go talk to the police about it you know it's one step up to go to well there's a camera a CCTV camera and the police can go back and check at any time and then another step up to like oh actually it's just running all the time and there's an AI facial recognition detector on there and maybe you know maybe in the future an AI like activity detector that's also flagging you know this looks suspicious um in some ways there's no like qualitative change in what's happened it's just like you could be seen doing something but I think you do also need to grapple with the fact that if it's much more ubiquitous much cheaper then the situation is different I mean I think with surveillance people immediately go to the kind of law enforcement use cases and I think it is really important to figure out what the right trade-offs are between achieving sort of law enforcement objectives and being able to catch criminals and you know prevent bad things from happening while also recognizing you know the huge issues that you can get if this technology is used with overreach for example you know facial recognition works better and worse on different demographic groups and so if police are as they have been in some parts of the country going and arresting people purely on a facial recognition match and on no other evidence there's a story about a woman who was eight months pregnant having contractions in a jail cell after having done absolutely nothing wrong and
being arrested only on the basis of a you know a bad facial recognition match so I personally don't go for you know the this needs to be totally banned and no one should ever use it in any way for anything but I think you really need to be looking at how are people using it what happens when it goes wrong what recourse do people have what kind of access to due process do they have and then when it comes to private use I really think we should probably be a bit more you know restrictive like I don't know it just seems pretty clearly against I don't know freedom of expression freedom of movement for somewhere like Madison Square Garden to be kicking their own lawyers out I don't know I'm not a lawyer myself so I don't know what exactly the state of the law around that is but I think the sort of civil liberties and um privacy concerns there are pretty clear I think the problem with sort of an existing set of technology getting infused with more advanced capability sort of unbeknownst to the common population at large is certainly a trend and one example that shook me up is uh a video went viral recently of a security camera from a coffee shop which showed a view of a cafe full of people and baristas and basically over the heads of the customers like the amount of time they spent at the cafe and then over the baristas was like how many drinks have they made and then you know so what does this mean like ostensibly the business can one track who is staying on their premises for how long learn a lot about customer behavior without the customer's knowledge or consent and then number two the businesses can track how productive their workers are and could potentially fire let's say less productive baristas let's talk about the problems and the risks here and like how is this legal I mean the short version is and this comes up again and again and again if you're doing policy um the US has no federal privacy laws like there are no rules on
the books for you know how companies can use data the US is pretty unique in terms of how few protections there are of what kinds of personal data are protected in what ways efforts to make laws have just failed over and over and over again but there's now this sudden stealthy new effort that people think might actually have a chance so who knows maybe this problem is on the way to getting solved but at the moment it's a big big hole for sure and I think step one is making people aware of this right because people have to your point heard about online tracking but having those same set of analytics in like the physical space in reality it just feels like the Rubicon has been crossed and we don't really even know that's what's happening when we walk into whatever grocery store I mean again yeah and again it's about sort of the scale and the ubiquity of this uh because again it could be like your favorite um barista knows that you always come in and you sit there for a few hours on your laptop because they've seen you do that a few weeks in a row that's very different to this data being collected systematically and then sold to you know data vendors all around the country and used for all kinds of other things or outside the country um so again I think we have these sort of intuitions based on our real-world person-to-person interactions that really just break down when it comes to sort of the size of data that we're talking about here I also want to talk about scams so folks are being targeted by phone scams they get a call from their loved ones it sounds like their family members have been kidnapped and being held for ransom in reality some bad actor just used off-the-shelf tools to scrub their social media feeds for these folks' voices and scammers can then use this to make these very believable hoax calls um where people sound like they're in distress and being held captive somewhere so we have reporting on this particular hoax now but what's on
the horizon what's like keeping you up at night I mean I think that the obvious next step would be with video as well I mean definitely if you haven't already gone and talked to you know your parents your grandparents anyone in your life who is uh not super tech-savvy and told them like you need to be on the lookout for this you should go do that I talk a lot about kind of policy and what kind of government involvement or regulation we might need for AI I do think a lot of things we can just adapt to and we don't necessarily need new rules for so I think you know we've been through a lot of different waves of online scams and I think this is the newest one and it really sucks for the people who get targeted by it but I also expect that you know five years from now it'll be something that people are pretty familiar with and will be a pretty small number of people who are still vulnerable to it so I think the main thing is yeah be super suspicious of any voice definitely don't use voice recognition for like your bank accounts or things like that I'm pretty sure some banks still offer that you know ditch that um definitely something more secure and yeah be on the lookout for video scamming as well and for people you know um on video calls who look real I think there was recently just the other day um a case of a guy who was on a whole conference call where there were a bunch of different AI-generated people all on the call and he was the only real person got scammed out of a bunch of money um so that's coming totally content-based authentication is on its last legs it seems definitely it's always worth like checking in with what is the baseline that we're starting with and I mean so for instance a lot of things um a lot of things are already public and they don't seem to get misused so like I think a lot of people's addresses are listed publicly you know we used to have literal you know White Pages where you can look up
someone's address um and that mostly didn't result in you know in terrible things happening or you know I even think of silly examples like I think it's really nice that uh delivery drivers or when you go to a restaurant to pick up food that you ordered it's just there all right so let's talk about what we can actually do it's one thing to regulate businesses like cafes and restaurants it's another thing to rein in all the bad actors that could have used this technology can laws and regulations actually protect us yeah they definitely can I mean and they already are again AI is so many different things that there's no one set of AI regulations there's plenty of laws and regulations that already apply to AI so um there's a lot of concern about AI you know algorithmic discrimination um with good reason but in a lot of cases there are already laws on the books you know you can't discriminate on the basis of race or gender or sexuality or whatever it might be um and so in those cases you don't even need to pass new laws or make new regulations you just need to make sure that the agencies in question have you know the staffing they need um maybe they need to have the exact authorities they have tweaked in terms of who are they allowed to investigate or who are they allowed to penalize or things like that there are already rules for things like self-driving cars you know the Department of Transportation is handling that it makes sense for them to handle that for AI and banking there's a bunch of banking regulators that have a bunch of rules um so I think there's a lot of places where you know AI actually isn't fundamentally new and the existing systems that we have in place are doing an okay job at handling that though they may need again more staff or slight changes to what they can do and I think there are a few different places where there are kind of new challenges emerging at sort of the cutting edge of AI where
you have systems that can really do things that computers have never been able to do before and whether there should be rules around making sure that those systems are being kind of developed and deployed responsibly I'm particularly curious if there's something that you've come across that's really clever or like a model for what good regulation looks like I think this is mostly still a work in progress so I don't know that I've seen anything that I think really absolutely nails it I think a lot of the challenge that we have with AI right now relates to how much uncertainty there is about what the technology can do what it's going to be able to do in 5 years you know experts disagree enormously about those questions which makes it hard to make policy so a lot of the policies that I'm most excited about are about shedding light on those kind of questions giving us a better understanding of where the technology is so some examples um of that are things like uh President Biden created this big executive order last October and had all kinds of things in there one example was a requirement that companies that are training especially advanced systems have to report certain information about those systems to the government and so that's a requirement where you're not saying can't build that model can't train that model um you're not saying the government has to approve something you're really just sharing information and creating kind of more awareness and more ability to respond as the technology changes over time which is you know such a challenge for government keeping up with this fast-moving technology there's also been a lot of good movement towards funding like the science of measuring and evaluating AI a huge part of the challenge with figuring out what's happening with AI is that we're really bad at actually just measuring how good is this AI system how do these two AI systems compare to each other is one of them sort of quote unquote smarter
so I think there's been a lot of attention over the last year or two into funding and establishing within government um better capabilities on that front I think that's really productive okay so policy makers are definitely aware of AI if they weren't before and plenty of people are worried about it uh they want to make sure it's safe right uh but that's not necessarily easy to do and you've talked about this how it's hard to regulate AI so why is that what makes it so hard yeah I think there's at least three things that make it very hard one thing is AI is so many different things like we've talked about um it cuts across sectors you know has so many different use cases it's really hard to get your arms around you know what it is what it can do what impacts it will have a second thing is it's a moving target so what the technology can do is different now than it was even two years ago let alone 5 years ago 10 years ago um and you know policy makers are not good at sort of agile policymaking um they're not like software developers and then the third thing is no one can agree on how they're changing or how they're going to change in the future if you ask five experts you know where the technology is going you will get five completely different answers often five very confident completely different answers so that makes it really difficult for policy makers as well because they can't just get scientific consensus and take that and run with it so I think maybe this kind of third factor is the one that I think is the biggest challenge for making policy for AI which is that for policy makers it's very hard for them to tell who should they listen to what problem should they be worried about um and how is that going to change over time speaking of who you should listen to obviously you know the very large companies in the space have an incentive and there's been a lot of talk about regulatory capture when you ask for transparency why would
companies give a peek under the hood of what they're building? They just cite it as proprietary. On the other hand, these companies might want to set up a policy and institutional framework that's actually beneficial for them and prevents any future competition. How do you get these powerful companies to participate and play nice? Yeah, it's definitely very challenging for policymakers to figure out how to interact with those companies, in part because they lack the expertise and the time to really dig into things in depth themselves. A typical Senate staffer might cover technology issues and trade issues and veterans' affairs and agriculture and education, and that's their portfolio. So they're scrambling; they need outside help. I think it's very natural that the companies come in and play a role, and there are plenty of ways that policymakers can really mess things up if they don't know how the technology works and aren't talking to the companies they're regulating about what's going to happen. The challenge, of course, is how you balance that with external voices who will point out the places where the companies are actually being self-serving. That's where it's really important that civil society has the resources to be in these conversations too. Certainly that's what we try to do at CSET, the organization I work at: we're totally independent and really just trying to work in the interest of making good policy. The big companies obviously do need to have a seat at the table, but you would hope they have a seat at the table, not 99 seats out of 100, in terms of who policymakers are talking to and listening to. There also seems to be a challenge with enforcement, right? You've got all these AI models already out there, a lot of them open source. You
can't really put that genie back in the bottle, nor can you really start moderating how this technology is used without, I don't know, going full 1984 and having a process on every single computer monitoring what it's doing. So how do we deal with this landscape where you have closed-source and open-source models, various ways to access and build upon this technology? Yeah, I think there are a lot of intermediate options between total anarchy and full 1984. For example, Hugging Face is a very popular platform for open-source AI models, and Hugging Face has in the past delisted models considered to be offensive or dangerous or whatever it might be. That actually does meaningfully reduce the usage of those models, because Hugging Face's whole deal is to make them more accessible, easier to use, easier to find. Depending on the specific problem we're talking about, there are also things that, for example, social media platforms can do. If we're talking about, as you said, child pornography, or political disinformation, things like that, maybe you can't control that at the point of creation, but the Facebooks and Instagrams of the world already have methods in place to detect that material, suppress it, and report it. So there are other mechanisms you can use. And then specifically on the image and audio generation side, there are some really interesting initiatives underway, mostly led by industry, around what gets called content provenance or content authentication, which is basically: how do you know where this piece of content came from? How do you know if it's real? That's a very rapidly evolving space, with a lot of interesting work happening. I think there's a good amount of promise, not for perfect solutions
where we always know whether something is real or fake, but for making it easier for individuals and platforms to recognize: okay, this is fake, it was AI-generated by this particular model; or this is real, it was taken on this kind of camera and we have the cryptographic signature for that. I don't think we'll ever have perfect solutions, and again, societal adaptation is just going to be a big part of the story, but I do think there are pretty interesting technical and policy options that can make a difference. Definitely. And even if you can't completely control the generation of this material, there are ways to drastically cap its distribution, and I think that reduces some of the harms. At the same time, a bunch of platforms have started labeling content that is synthetically generated. That's exciting, because I don't think the average consumer should have to be a deepfake detection expert, and if there could be a technology solution to this, that feels a lot more exciting. Which brings me to the future. I'm curious, in your mind, what are the dystopian and utopian scenarios in all of this? Let's start with the dystopian one: what does a world with inadequate or bad regulation look like? Paint a picture for us. So many possibilities. I think there are worlds that are not that different from now, where you just have automated systems doing a lot of things, playing a lot of important roles in society, in some cases doing them badly, and people not having the ability to go in and question those decisions. There's obviously this whole discourse around existential risk from AI, et cetera. Kamala Harris gave a whole speech about this; I forget the exact examples, but if someone loses access to Medicare because of an algorithmic issue, is that not existential for that elderly person? So there are already people
who are being directly impacted by algorithmic systems and AI in really serious ways. Even some of the reporting we've seen over the last couple of months on how AI is being used in warfare, like videos of a drone chasing a Russian soldier around a tank and then shooting him. I don't think we're in a full dystopia, but there are plenty of things to worry about already. Something I worry about quite a bit, or that feels intuitively to me like a particularly plausible way things could go, is what I think of as the WALL-E future. I don't know if you remember that movie. Oh, absolutely, with the little robot. The piece I'm talking about is not the junked Earth; it's the people in that movie. They just sit in their soft roll-around wheelchairs all day and have content and food and whatever to keep them happy. What worries me about that is that there's a really natural gradient toward what people want in the moment, what they will choose in the moment, which is different from what they will really find fulfilling, or what will build a meaningful life. And there are just really natural commercial incentives to build things that people superficially want, but then you end up with this really meaningless, shallow, superficial world, and potentially one where most of the consequential decisions are being made by machines that have no concept of what it means to lead a meaningful life, because how would we program that into them, when we struggle to put our finger on it ourselves? So I think about those kinds of futures: not ones where there's some dramatic big event, but where we gradually hand over more and more control of the future to computers that are more and more sophisticated but don't really have
any concept of meaning or beauty or joy or fulfillment or flourishing or whatever it might be. I hope we don't go down those paths, but it definitely seems possible that we will. They can play to our hopes, wishes, anxieties, worries, all of that, and just give us junk food all the time, whether that's in terms of nutrition or in terms of audiovisual content, and that could certainly end badly. Let's talk about the opposite of that, the utopian scenario. What does a world look like where we've got this perfect balance of innovation and regulation and society is thriving? I think a very basic place to start is: can we solve some of the big problems in the world? And I do think AI could help with those. Can we have a world without climate change, a world with much more abundant energy that is much cheaper, so more people can have access to it, where we have better agriculture and greater access to food? Beyond that, what I'm more interested in is setting our kids and our grandkids and our great-grandkids up to decide for themselves what they want the future to look like from there, rather than having some particular vision of where it should go. But I absolutely think AI has the potential to really contribute to solving some of the biggest problems we face as a civilization. It's hard to say that sentence without sounding grandiose and trite, but I think it's true. So maybe to close things out: what can we do? You mentioned some examples of being aware of synthetically generated content. What can we as individuals do when we encounter, use, or even discuss AI? Any recommendations? I think my biggest suggestion here is just not to be intimidated by the technology, and not to be intimidated by technologists. This is really a technology where we
don't know what we're doing; the best experts in the world don't understand how it works. So if you find it interesting, be interested. If you think of fun ways to use it, use them. If you're worried about it, feel free to be worried. The main thing is feeling like you have a right to your own take on what you want to happen with the technology. No regulator, no CEO, is ever going to have full visibility into all the different ways it's affecting millions and billions of people around the world, so trusting your own experience, exploring for yourself, and seeing what you think is the main suggestion I would have. It was a pleasure having you on, Helen. Thank you for coming on the show. Thanks so much, this was fun. So maybe I bought into the story that played out on the news and on X, but I went into that interview expecting Helen Toner to be more of an AI policy maximalist, you know, the more laws the better, which wasn't at all the person I found her to be. Helen has a place for rules, a place for techno-optimism, and a place for society to just roll with adapting to the changes as they come. For balance: policy doesn't have to mean being heavy-handed and hamstringing innovation; it can just be a check against perverse economic incentives that are really not good for society, and I think you'll agree. But how do you get good rules? A lot of people in tech are going to say you don't; they know the technology and its pitfalls best, not the lawmakers. And Helen talked about the average Washington staffer who isn't an expert and doesn't even have the time to become one, and yet it's on them to craft regulations that govern AI for the benefit of all of us. Technologists have the expertise, but they've also got that profit motive; their interests aren't always going to be the same as the rest of ours. In tech you'll hear a lot of "regulation bad, don't
engage with regulators," and I get the distrust. Sometimes regulators do not know what they're doing: India recently put out an advisory saying every AI model deployed in India first had to be approved by regulators. Totally unrealistic. There was a huge backlash, and they've since reversed that decision. But not engaging with government is only going to give us more bad laws, so we've got to start talking, if only to avoid that WALL-E dystopia. Okay, before we sign off for today, I want to turn your attention back to the top of our episode. I told you we were going to reach out to Sam Altman for comment. A couple of hours ago we shared a transcript of this recording with Sam and invited him to respond, and we've just received a response from Bret Taylor, chair of the OpenAI board. Here's the statement in full, quote: "We are disappointed that Ms. Toner continues to revisit these issues. An independent committee of the board worked with the law firm WilmerHale to conduct an extensive review of the events of November. The review concluded that the prior board's decision was not based on concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners. Additionally, over 95% of employees, including senior leadership, asked for Sam's reinstatement as CEO and the resignation of the prior board. Our focus remains on moving forward and pursuing OpenAI's mission to ensure AGI benefits all of humanity." End quote. We'll keep you posted if anything unfolds. The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Elah Feder and Sarah McRae. Our editors are Banban Cheng and Alejandra Salazar. Our showrunner is Ivana Tucker, and our associate producer is Ben Montoya. Our engineer is Aja Pilar Simpson. Our technical director is Jacob Winik, and our executive producer is Eliza Smith. Our fact-checkers are Julia Dickerson and Dan Kalachi. And I'm your host, Bilawal Sidhu. See y'all in the next one. [Music]

Related conversations

AXRP

15 Feb 2026

Guive Assadi on AI Property Rights

This conversation examines governance through Guive Assadi on AI Property Rights, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med 0 · avg -1 · 136 segs

AXRP

28 Jun 2025

Peter Salib on AI Rights for Human Safety

This conversation examines governance through Peter Salib on AI Rights for Human Safety, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med 0 · avg -3 · 196 segs

AXRP

27 Nov 2023

AI Governance with Elizabeth Seger

This conversation examines governance through AI Governance with Elizabeth Seger, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med -7 · avg -8 · 110 segs

AXRP

7 Aug 2025

Tom Davidson on AI-enabled Coups

This conversation examines core safety through Tom Davidson on AI-enabled Coups, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med 0 · avg -5 · 133 segs

Counterbalance on this topic

Ranked with the mirror rule in the methodology: picks sit closer to the opposite side of your score on the same axis (lens alignment preferred). Each card plots you and the pick together.