Is AI just all hype? w/ Gary Marcus
Why this matters
This episode strengthens first-principles understanding of alignment risk and the strategic conditions that shape safe outcomes.
Summary
This conversation examines core AI-safety questions through "Is AI just all hype? w/ Gary Marcus", surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 39 full-transcript segments: median 0 · mean 0 · spread −10 to 8 (p10–p90 0–0) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.
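The summary statistics above can be reproduced with a short script. A minimal sketch, assuming the 39 per-slice scores are available as a list of integers; the `band` cutoff of ±3 and the nearest-rank percentile convention are illustrative assumptions, not the site's actual scoring pipeline:

```python
from statistics import mean, median

def summarize(scores, band=3):
    """Summarize per-slice scores for an episode.

    `band` is an assumed illustrative cutoff: scores <= -band count as
    risk-forward, scores >= +band as opportunity-forward, else mixed.
    """
    n = len(scores)
    ordered = sorted(scores)
    # Nearest-rank percentiles (one simple convention among several).
    p10 = ordered[max(0, int(0.10 * n) - 1)]
    p90 = ordered[min(n - 1, int(0.90 * n) - 1)]
    return {
        "n": n,
        "median": median(scores),
        "mean": mean(scores),
        "spread": (min(scores), max(scores)),
        "p10_p90": (p10, p90),
        "risk_pct": 100 * sum(s <= -band for s in scores) / n,
        "mixed_pct": 100 * sum(-band < s < band for s in scores) / n,
        "opp_pct": 100 * sum(s >= band for s in scores) / n,
    }
```

For example, `summarize([-4, 0, 0, 0, 4])` reports a median and mean of 0 with a spread of (−4, 4) and a 20/60/20 band split under the assumed cutoff.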
Mixed leaning, primarily in the Society lens. Evidence mode: interview. Confidence: high.
- Emphasizes alignment
- Emphasizes safety
- Full transcript scored in 39 sequential slices (median slice 0).
Editor note
Useful mainstream bridge episode for teams that need a shared baseline quickly.
Play on sAIfe Hands
Episode transcript
YouTube captions (auto or uploaded) · video 8Sh3og8p-u4 · stored Apr 2, 2026 · 1,128 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/is-ai-just-all-hype-w-gary-marcus.json when you have a listen-based summary.
Show full transcript
artificial intelligence has been around at least since the early 1950s when researchers at the University of Manchester made a machine that could play checkers and a machine that could play chess but at the end of 2022 70 years later OpenAI casually released an early demo of ChatGPT and suddenly AI was all anyone could talk about even if you were steeped in the tech industry it seemed to explode out of nowhere like a sci-fi story come to life high schoolers had a whole new method for cheating on homework coders had a new automated tool that could debug code across dozens of programming languages users watched in amazement as the blinking cursors unspooled entire stories or screenplays in a matter of seconds and ever since that first release AI has stayed at the top of the news cycle how to use AI for family time BBC panic and possibility what workers learned about AI in 2023 Google pitches media outlets on AI that could help produce news AI generated garbage is polluting our culture mainstream media covered the emerging capabilities of AI and the uncanny improvements but even more than that it covered the reactions highly prominent tech leaders and scholars even celebrities had all sorts of grandiose statements to make about AI's future and with all of these predictions hitting social media we started to hear from two polarized camps there are the techno-optimists who claim that AI is going to solve major global issues and usher in a world of radical abundance Marc Andreessen who wrote the Techno-Optimist Manifesto wrote that AI is quite possibly the most important and best thing our civilization has ever created certainly on par with electricity and microchips and probably beyond those for the so-called techno-optimists soon enough we'll all be living in a world with boundless creativity and invention and then there are the pessimists the people who believe we're going to create a superintelligence which could take off without us and pursue arbitrary goals
that could destroy human civilization these folks toss around terms like p(doom) the probability that AI will annihilate everyone on the planet some of them self-identify as doomers like Eliezer Yudkowsky who proposes that a superintelligent AI could commandeer weapon systems against humans if it comes to see us as a threat the pessimists are picturing dystopia urging us to slow down or stop development altogether roadmapping catastrophic outcomes I want to show you something so please bear with me hey ChatGPT my producer Sarah and her dog are on one side of the river they want to cross the river there is a boat available what steps do they need to follow write the solution in bullet points and be as concise as possible Sarah gets into the boat and rows to the other side of the river what about the dog Sarah leaves the boat on the other side and walks back to the original side Sarah puts her dog in the boat and rows with the dog to the other side of the river what why not do that Sarah and her dog disembark from the boat on the other side wow okay this is the technology that has the power to save or destroy humanity sometimes it seems like all the hype around AI is just that hype I'm Bilawal Sidhu and this is the TED AI Show where we figure out how to live and thrive in a world where AI is changing everything as we approach the 250th anniversary of the Declaration of Independence TED is traveling to the birthplace of American democracy Philadelphia for an exciting new initiative together TED and Visit Philadelphia are exploring democratic ideas in a series of three fireside chats that will shape our collective future as we work towards a more perfect union our first event about living in a democracy will take place on July 9th at the National Constitution Center hosted by New York Times bestselling author AJ Jacobs it will feature TED Talks and a moderated Q&A with Katherine Maher president and CEO of National Public Radio and former CEO of the Wikimedia Foundation
and Baratunde Thurston Emmy-nominated host producer writer and public speaker best known for his work on The Daily Show and the host of the How to Citizen podcast we'll return to Philadelphia in September and November for our final two fireside chats of the year thanks to Visit Philadelphia and our supporting partners Bank of America Comcast NBCUniversal and Highmark want to learn more go to visit.com NFTs GPUs grokking capacitance the tech world is full of a lot of lingo keep up with the latest acronyms and technology news with TED's new newsletter TED Talks Tech will bring you tech headlines talks podcasts and more on a biweekly basis so you can easily keep up with all things tech and AI subscribe now at the link in our show notes when it comes to hype Gary Marcus is one of the main people telling us to tone it down Gary self-identifies as an AI skeptic and that's really what he's known for in the AI industry he doesn't think that AI is going to save us or make us go extinct and he doesn't think that the large language models that have captivated the public imagination over the past few years are anywhere close to intelligent nor does he think that they will be if we continue down this training path but he does think that the hype around generative AI has real consequences and that there are concrete harms we should take into account when we think about what kind of AI future we want Gary who was a professor of psychology and neuroscience at NYU is known for his sharp critiques of generative AI models and even though a number of tech people portray him as someone who just hates AI he actually co-founded his own machine learning company which was acquired by Uber in 2016 so just because he's skeptical of the current path AI is on doesn't mean he thinks we should avoid building advanced AI systems altogether his latest book will be out this fall and is titled Taming Silicon Valley: How We Can Ensure That AI Works for Us so Gary there has been so much hype
about AI and it's funny because when you go talk to you know let's say non-technical knowledge workers they have this perception of what AI can do when they read the news or they go on Twitter and then they go use the tools and they're like holy crap this kind of sucks and I think it's almost frowned upon to discuss the limitations of the technology oh I'm hated like I think people have me on their dartboards and stuff you're often portrayed as someone who's an AI naysayer but you're actually someone who's devoted your life to AI so I want to go back to the beginning and ask you when you were first getting started what was it about AI that excited you I've been interested since I was a kid I learned to program on a paper computer when I was eight years old and I very quickly started thinking like how do we get it to play games I got interested in game AI and something not everyone will know about me is I didn't finish high school and the reason I was able to not finish high school is because I had written a Latin to English translator on a Commodore 64 and that allowed me to skip the last couple of years of high school I was already thinking about AI in my spare time it wasn't a school project just always passionate about it I've had multiple cycles I should say also of disillusionment about AI so you know it's possible to love something and be disappointed in it I think a lot of parents of teenagers probably feel that way I kind of feel like AI is like a recalcitrant teenager right now like you want the best for it but you can see right now it's struggling with its decision-making and its reasoning and you know I really do want AI to succeed and I'm glad that you picked up on that because a lot of people kind of misrepresent me as you know just as you say as an AI naysayer I am a generative AI naysayer I just don't think it's going to work I think there are a lot of risks associated with it and I don't like the business decisions that people are making and for that
matter I don't like decisions the government is making by not making decisions um there's a lot of things I don't like right now but I am at some level almost an accelerationist there's so many problems that I don't think we can solve on our own or even with calculators even with supercomputers and so like I would love for AI to come to the rescue in a lot of things and so I believe in the vision and when I hear Sam Altman say like AI is going to do all this stuff I'm with him it's only when he says well we're on a trajectory right now to AGI that I like kind of roll my eyes and I'm like no are you kidding me there's so many problems we need to solve before we have an AI that is sophisticated enough to behave as an actual scientist I mean so you introduced a couple of interesting terms obviously AGI which is a hotly debated term but I think you had a very pithy uh definition at least a working definition what the heck is AGI this thing that seemingly every major AI lab is moving towards how do you define it AGI is supposed to be artificial general intelligence and what they were thinking about is basically intelligence with the flexibility and generality of human intelligence and part of the contrast there is AI has had a long history of building solutions that are very narrow and tightly fit to particular problems so like a chess computer that can play chess and do nothing else is a classic example of narrow AI and I think a lot of us in the field for a long time have been like that's not really what we mean by intelligence it's useful it's more like a calculator that's built for that purpose and then ChatGPT came along and it kind of disrupted the usual ways that we talk about this so in some ways it's not a narrow AI you can talk to it about anything but it's not really what we meant by artificial general intelligence it's not actually that smart you know whether or not it gets something right depends very heavily on what exactly is in the training set
and how close is your problem and you can't really count on it because you never know when your particular problem is going to be subtly different from one that it was trained on and so you can't count on it that's not really intelligent so you have the generality but it's almost like an approximation of intelligence not really intelligence and I think it's further away than most people say and if you ask me when AGI is going to come I'll say very conservatively it could come in 2030 because like everybody's working on it there's billions of dollars being spent and what do I know but if you ask me like from my perspective as a cognitive scientist what do we have solved and what don't we have solved where have we made progress and not we've made tremendous progress on mimicry and very little progress on planning on reasoning those are problems we've been thinking about in the field of AI for 75 years and progress on those has really been limited so Gary you say progress has been limited but what about achievements like AlphaGo I mean that program's ability to strategize in games like Go or chess is far superior to humans like these systems are making moves that humans would never even think of making chess is exponential Go is exponential reasoning and planning in open-ended real world problems where you can't crush it by having infinite simulated data the way you can with Go we have not made progress on those problems we have this approximator that's 80% correct 80% correct is great for some problems but if you want to do medicine if you want to guide domestic robots things like that 80% isn't really good enough what happened with driverless cars is we got to 80% correct real fast and then there've been subtle tiny improvements you know every year or whatever but we're not anywhere close to a general purpose level five self-driving car you can plop it down in a city it's never been and drive as well as I could drop an Uber driver in that city which is not close to
it so you know why has driving been so hard even though we have you know millions or billions I guess now of training hours and it might be because there's this immense periphery of outliers which is really the issue for a lot of these things um you know cases that were just not in anybody's training set and my favorite example of this is a Tesla somebody pressed summon you probably saw this video and it ran straight into a jet you knew where I was going right um the worst summon example of all time um like no human would have made that mistake now there's a separate question like eventually driverless cars might still make mistakes that humans don't but be safer but for now they're not safer than humans and they do make these bizarre errors that tell you something about their operation and tell you something about why the problem's been hard to solve the reason the problem's been hard to solve is there are outlier cases there's a fake-it-till-you-make-it culture in Silicon Valley I think and the reality is I don't think we're close to AGI I'm not so sure we're even close to driverless cars it is interesting though you're talking about mimicry right these systems are very good at it whether it's mimicking how we write text or produce images that mimicry aspect gives you the illusion of understanding right that's where it gets a little insidious and people start trusting these systems more than they should they make really bizarre errors I gave the example of Galactica saying that Elon Musk died in a car accident and the exact sentence was something like on March 18th of 2018 Elon Musk was involved in a fatal car collision well in a classical AI system if you assert something you look it up in a database or you can look it up in a database to see if it's true you can't do that in these LLMs that's just not what they do they don't even know that they're asserting anything there's an enormous amount of data
that indicates that Elon Musk is in fact still alive right so the notion that he died in 2018 is an absolute non-starter the evidence weighs heavily against that and any credible artificial general intelligence should be able to among other things accumulate available evidence like if you can't do that what are you doing so what happened in the particular example is on March 18th you know some people died in car accidents and some of them owned Teslas but it doesn't you know distinguish between Elon Musk owning Tesla the company versus an individual owning a Tesla car and so forth so you get the approximate feel of language but without the ability to do any fact-checking whatsoever I want to just double click on this language part right like when we put together language as humans you know it's common for us to perhaps misspeak we say the wrong word you know if we don't have all the information about something we probably say inaccurate stuff certainly social media is littered with inaccuracies either intentional or unintentional and so in your understanding how is this different from what happens when these generative AI models hallucinate and I know that's a term that you have perhaps popularized as well popularized with regret the problem with it before I get to the main part of your question is that it implies a kind of intentionality um or implies a humanity it's a little bit too anthropomorphic you have to think about these things like a cognitive psychologist does and the first rule of cognitive psychology is any given behavior can be caused in different ways by different internal machinery different internal mechanisms so you know humans make mistakes all the time and say things that are untrue all the time one of the reasons they say things that are untrue is that they are lying humans lie all the time they lie to fool other people they lie to fool themselves and so forth that's never the explanation um for an LLM hallucination
they have no intent so you know right from the jump you know that the explanation is different I mean in fact whether it says something that's true or false it's using the same mechanism it's hopping through the clusters and sometimes it hops and lands in a good place 'cause it has pretty good data about the world indirectly through language and maybe through vision so sometimes it lands on the right place sometimes the wrong place it's the same mechanism if you lie you're actually doing some second order processing thinking did anybody catch me what's my body language whatever so those are very different mechanisms there are other reasons people um make mistakes too like their memories are bad um and you could argue there's some of that going on in an LLM with the lossiness and so forth so there might be a little bit of overlap sometimes we try to reconstruct something we know we don't really have it right we try to make our best guess making a best guess is also a little bit different so they're just not the same um underlying mechanisms for a hallucination you have to look at the underlying causal mechanism and it just happens that the reason that LLMs are so prone to this is they cannot do fact-checking that's just what they do so yeah like strolling through latent space may not be the equivalent of like AGI certainly uh it's convenient at times but like you said it's 80% as good it might even be useful like I mean it might be part of AGI I don't want to say that we should just throw all the stuff away altogether but it is not AGI like it could be a subset of AGI I always like to think of these as sets of tools so like totally if somebody came to me and said I have a power screwdriver I can build houses now I'd say it's great that you have a power screwdriver it's a fabulous invention but you still need a hammer and a plane and a level and you know blueprints and so forth you need a federation of approaches that in concert will solve this
grander problem exactly I like the word in fact it's funny you said that I like the word orchestration if you look at neuroscience the brain imaging literature the one thing I think it really tells us like I think it was very expensive and didn't tell us that much but the one thing it really told us is you can take somebody you teach them a new task you put them in a brain scanner and they will online on the spot figure out a new way of orchestrating the pieces of the brain that they already have in order to solve that new task and then in fact what's even more amazing take another 20 people and they'll probably mostly do the same thing so there's some systematic stuff that the human brain does to orchestrate its capacities it's amazing and we don't know how the brain does that um I don't know if AI has to do the same thing but for humans you know that orchestration is like very fundamental to what artificial general or sorry natural general intelligence is about it's putting together the right tools for the right job at the right moment to bring things back to a TED Talk from this year Mustafa Suleyman of Microsoft has said um that AI hallucinations can and will be cured by as early as next year and many like him in the industry you know who are sort of the leaders of the current AI wave have been saying this too what are they getting wrong so um I don't know if they actually believe it or not I think if they believe it they're naive um what they're missing is these systems inherently don't do fact-checking they don't do validation I like your phrase they stroll through latent space and that's just not a solution to the hallucination problem I mean imagine you had a newspaper and you had a bunch of journalists some of whom were actually on acid and they file their stories what are you going to do like have them write more stories no you need a fact-checker you need someone whose skill is to like trace things down that's just not what these systems do like if
you actually understand the technical side of it it just seems like an absurd claim so yeah clearly you don't think more data and more compute is the solution um I don't think it's the answer there's always going to be outliers and it's just not the right technology for it so um I think my most important work which was wildly underappreciated at the time was in 1998 I talked about what I called a training space problem I looked at earlier neural networks and showed that they didn't generalize essentially beyond the distribution so now it's become a generally known truth that there's a problem with distribution shift and that's why the driverless car thing like end-to-end deep learning for driverless cars doesn't work because the distribution shifts you have a distribution of data um that you're trained on and now some of the test is outside that distribution like when someone hits someone and there's an airplane there so you wind up going out of the distribution that is a problem for this class of models that people have been focusing on for a quarter century it was a problem in 1998 it has not been solved there's no conceptual idea out there where somebody can say well I can explain to you why this has been going on um and here's how I'm going to fix it someday someone will do that in fact I think but right now it's just like mysticism I'll add more data and it'll go away and then there's one other way to look at the question which is to look at the actual data so everybody's like it's an exponential I'm like why don't you plot the data so we don't have all the data we need to do this perfectly but what you should do is you should plot GPT to GPT-2 to GPT-3 to GPT-4 we don't really have GPT-5 um and fit the curve and tell me what the exponent is okay and the problem is whatever curve fitting you do for that you should have seen progress from GPT-4 which is now 13 months ago 14 months ago to now and we don't right everything
has topped out at GPT-4 if you do the statistical procedures you should do to plot curves you have to say that it has slowed down you know we should be further ahead right now if you believe the earlier trend that earlier trend is just not holding anymore and I think to your point a bunch of leaders like Demis Hassabis have also come out on record saying like yeah maybe like you know just scaling data and compute may not be the answer we may need some fundamental breakthroughs which was the point by the way of my most maligned paper um Deep Learning Is Hitting a Wall we've seen these empirical generalizations the so-called scaling laws but they're not laws they're not laws like physics right you cannot say optionally I won't follow gravity today right um but the scaling laws were just generalizations for a period in time um which would be like saying you know the stock market rose for the last six years that doesn't mean it's going to rise again next year but still there's a lot of conversation about AI risk right like AI has certainly entered the public consciousness and when we start thinking about AI risk we hear all about the risks from AI working too well right this is the like AGI has been achieved internally scenario but from what I've heard you say it sounds like you believe the bigger risk actually comes from the gap between AI appearing to have human cognition when it actually doesn't have it so in your view what are those concrete harms that we need to be most concerned about a lot of the discussion is about risks of super smart AGI that may or may not materialize and certainly I think needs attention um but there are a lot of risks now I think the single biggest risk right now of AI of generative AI is that it can easily be misused and it is being misused so people are using it and also deepfake technologies and things like that um to make propaganda there's also you know people are using it for phishing expeditions other deepfake tools are
being used for non-consensual deepfake porn and so forth um so there's that whole class of things and then there's a whole class I would say they're more about reliability which is like some people have been sold a bill of goods about how strong this technology is and they want to put it everywhere like in weapons systems and electrical grids and cars so there are a bunch of problems that come from over-trusting systems that are not very sophisticated so there are different sets of problems right now but I think that they are serious and I think um it's worth asking oneself every day because this is a very much evolving dynamic thing are the net advantages of LLMs greater than the net costs and I honestly don't know the answer even if it is true as I have argued that we're kind of reaching a plateau with what we can do with LLMs we're not reaching a plateau with our understanding of what we can do with them so there are still going to be both positive and negative use cases that nobody's discovered even if there is no capability improvement let's say for another five years or whatever so yeah I think what you're saying is also like despite the gap between you know the expectations and reality humans wielding these models despite all the imperfections can still do a lot of good and bad right that's right and there might be an advantage to the malicious actors because the good actors mostly want their systems to be reliable and the bad actors care less so think about like spam like um you know spammers send out a million pieces of spam a day and they only need one of them to hit so there's some asymmetry there that may actually shift things but that's speculative I'm not certain of that another one of the ethics debates that has been dominant in this time of large language models is the copyright issue and the fact that these models are mimicking data that was created by real people whether it be writers artists or just people on
the internet do you think there's any plausible world in which you know like creators are actually remunerated for their data it has to be like that I mean look at what happened with Napster for a minute everybody was like information wants to be free I love getting all this free music and the courts said no it doesn't work that way people have copyrights here and so we moved to streaming and streaming is not perfect but we moved to licensed streaming that's what we did we went to licensing and that's what we're going to do here like either the courts will force it or Congress will force it or both there was an amazing letter from the House of Lords I just saw um in the UK saying basically if the governments don't act here we're going to normalize this thing where people are stealing left and right from artists and that's not cool I have this book coming out Taming Silicon Valley one of the things I say at the end is we should actually boycott stuff that isn't properly licensed because if we say oh it's fine that our friends who are artists get screwed we're next for the people listening to this right that are either overly optimistic or perhaps like overly skeptical about what's happening in AI like what is the right measured way to look at the innovations taking place what advice do you have for people that want to sort of combat this AI hype cycle like how should media be approaching coverage of AI and then how should consumers and decision makers making purchase decisions in the world be skeptical about consuming that AI news I think the consumers have to realize that it's in the interest of the companies to make AI sound more imminent than it is and you wind up with things like the Humane pin that are really exciting because they sound like they're going to you know do everything but your laundry for you and then they don't I think journalists need to get in the habit of asking skeptics and that includes me but there are many others out there saying you know
does this thing actually work if Sam says we're on a trajectory to AGI like ask 10 professors and see what they think of it and you know probably nine of them are going to say that's ridiculous this is not AGI we're actually very far away and that needs to be factored in like just reporting the press releases from the companies is usually wrong because the companies you know have an interest in stock price or valuation or whatever they're obviously going to have an optimistic bias um you want to counter that I see journalists do that all the time in politics but I don't see it as much in AI coverage like skeptics you know once in a while they get a line and I mean I shouldn't complain I've had a lot of you know media lately but I still think on the whole that there is a very strong bias to report essentially press releases from the companies saying you know we're close um to solving this thing and like we've been getting that for you know driverless cars for 10 years and the cars still aren't here like how many times do we have to see this movie before we realize that promises are cheap and until something has gone from a demo to actual production that everybody can use that you've tried out you should be a little bit skeptical so it does seem generative AI is here right we cannot put the genie back in the bottle as is uh you know perhaps stated in every single panel that I've seen about AI um what kind of restrictions and regulations should we be introducing around deployment you have a book coming out in September called Taming Silicon Valley give us the tldr it is a big challenge I can't do it all in one sentence one of the arguments of the book is that AI is kind of like a Hydra right now especially generative AI like there are new things popping up every day um we need a multi-pronged approach we need to have predeployment testing for something that's going to be for 100 million people so I don't think we need to um restrict research at the moment but as
things get rolled out to large numbers of people same as you do with medicine or cars or food or anything else um if you don't have the prior practice to say that this is safe then people should be required and what's crazy about GPT-4 is they wrote this long paper explaining many risks giving no mitigation strategies essentially for any of them and they just said good luck world and they didn't reveal what data they were trained on which would really help us in trying to mitigate the risk and so it's like good luck world we're not going to help you there's this quote that Meen Dow had of uh Greg Brockman some years ago saying we're not just going to throw things over the fence and leave them you know to the world to figure out but that's exactly what they're doing right now and we cannot trust Silicon Valley to self-regulate like look what's happened with social media they've just basically fallen down on that job and it's been really um bad for teenage girls it's really been bad for the polarization of society and so forth so um having some predeployment testing is important having auditing is important the government has to back up independent researchers and say yes we're going to make it easy for you to do the analysis to see if this stuff is safe you gave the example of a well-regulated industry um you know such as aerospace and obviously airline and air travel is largely safe but we still have an incident like the Boeing one right so is regulation the answer well I mean we would have a lot more incidents if we didn't have regulation imagine if we didn't have regulation then you know commercial airline travel like in the 1940s like lots of people died like routinely I don't think regulation is easy and I think we have to take an iterative approach to regulation just like we take an iterative approach to everything else and there are lots of problems with government and you know um part of the reason that I wrote this book is because I
don't think government by itself is getting to the right place, and I think we need to have more of the public involved in making the right choices about these things.

It's interesting you mention that government by itself won't be the counterbalance to the private sector, because I think a lot of politicians, if you talk to them, they're like, oh yeah, I'm going to do my stint here, then eventually I'd like to go do a stint at an AI company. You hear about the revolving door. So how do you see us combating that? Do you think consumers in aggregate are a strong enough voice to be that counterbalance?

I think consumers need to wake up and be that counterbalance. I think you're exactly right about the revolving door. When I testified in the Senate a year ago, every senator in the room seemed to appreciate the value of having strong AI regulation, but here we are 12 months later, and the EU has done something, but the US Congress has not. The president actually introduced an executive order that was as strong as he could, but under the nature of the Constitution of the United States he can't make law, so the things that Biden put forward are basically reporting requirements and things like that, and getting chief AI officers into different parts of the government. All of that is to the good, but we ultimately do need some constraints. I also talked about the need for international collaboration on AI governance. People were warm to that in the Senate, bipartisan, Republicans and Democrats, but nothing has actually happened. The public needs to realize that if it goes down that way, we're going to see the kinds of ways in which social media has disrupted society, but they're going to be worse. AI could easily do much more damage to privacy, really escalate cybercrime, possibly disrupt democracy itself. The only way anything is going to happen is if the people stand up, organize, put
their money where their mouth is, and say, this is important to us.

A common criticism of regulation also seems to be pitting the US against China, right? Like, hey, if we prematurely constrain innovation in the US, our adversaries are going to get ahead. What do you have to say about that?

In many ways China is actually ahead of us in regulating AI. For example, they had a copyright decision recently where a trademarked character was ripped off by an AI system, and the Chinese court said, you can't do that. So artists are actually probably better protected in China than they are in the US right now. There's something that China does that I think is indefensible, that I don't think we should do, which is that they demand their LLMs toe the party line. I think that's absurd; we should never do that. But in other ways China actually has some of the same concerns as we do: they don't want to deploy systems that are going to completely destroy the information ecosphere, cause massive amounts of cybercrime, and so forth. What would really change the world is getting to AGI first, and that's a question about research priorities. The United States has historically led innovation. If we stopped pouring all of our money into LLMs just because they're the shiny thing, and placed our bets on research more broadly, we could easily win that race, and I hope that we will.

So building off what you just said about research priorities, let's bring it full circle. This is pretty technical, but I think it's important for us to get into. Right now we're on this path of generative AI models that are based on these massive neural networks, which means they're kind of a black box: we really don't know why they do what they do. This is a huge departure from more rules-based symbolic AI systems, which would allow us to see exactly how a system makes a decision. So from a technical perspective, what is the direction you would like to see the AI industry go after, if not towards this
current paradigm of generative AI?

In my personal opinion, the most profitable thing to look at, and it would take a while, is neurosymbolic AI, which tries to combine the best of neural networks, which are good at learning, with symbolic systems, which are good at reasoning and fact-checking. I think this has to happen, and I think it will happen, and whoever gets there first is going to be at an enormous advantage. I don't think, though, that it's a two-day project; I think it's really, really hard, in part because the neural network side is not interpretable. If it were interpretable, you could just hand it off to the symbolic system, but it isn't; we don't really know what's inside the black box. So building the bridge between the black-box world and symbolic AI is really hard, and we probably need a whole bunch of innovations there. But I think in the long run that's what you want. You don't want just AI that can write a first draft; you want AI that you can trust, that you can count on, and I don't see how to get there unless you can check facts against known databases and things like that. You need an AI that can be consistent with the world, or at least give an explanation for why, rather than just one that dumps something out.

It's also interesting because, in a sense, the battle lines are drawn, right? People use terms like, oh, you're either an accelerationist or a decelerationist.

Right. If you're actually an accelerationist, what you should do is divorce yourself from LLMs and say, I want AI to work, but this isn't it. If you want AI to save the world, you should inspect every advance and say, is this actually getting me there, or is it wasting my time? In terms of people wanting to pause AI, I don't think we should pause research, but generative AI is causing problems, and we at least need to think about that. I think it's good that we have a public conversation comparing the risks and benefits, and if they were high enough, like if there's
enough cybercrime and disruption of democracy, maybe we should reconsider whether this is really worth it. And yeah, it helps people with brainstorming, but helping people with brainstorming, and maybe coders writing a little faster and so forth, might not actually offset a major disruption to democracy. We should actually think about that.

I agree. I mean, there is good that you can do with these models. You gave the Copilot example of assisted coding, where it can do the job reasonably enough, and getting it to 80% certainly saves humans a lot of drudgery, and that's exciting. So overall I have a techno-optimist bent. But where I struggle with this polarity of accelerationists versus the decel folks, or just that framing, is that even if you're a techno-optimist, that doesn't mean you can't be open about and acknowledge the limitations of these models.

Yeah, it's weird for me, because I actually think I am a techno-optimist. You heard it here first; not too many people know about it.

It's like we all agree on the North Star, and we should be solution-agnostic about how we get there, and not turn it into this dogmatic holy war of who's pro- or anti-tech. Everything is so divisive today, and the polarity prevents actual nuanced discourse from happening.

That's absolutely right, and it goes back to social media, right? It's very hard to even put out a nuanced view on social media. It actually goes back to the tech industry itself kind of accidentally creating the conditions that make it hard to have a good conversation.

Well, hey, look, I think there is no shortage of hype in the AI space, and occasionally we do need some cold water thrown on our faces. So, Gary, I'm so glad you're out there talking about the other side of all of this and being cautiously optimistic about how we proceed. That's what I'm taking away: that you and me are
both cautiously optimistic. I think you're painted as being far too negative, but honestly, if you delve into your arguments (not to use the "delve" word, I totally used that before ChatGPT, I swear), I think you'll find your position is a lot more layered.

Well, thanks for giving me a chance to explicate it.

[Music]

So, is AI going to save us or make us go extinct? Well, according to Gary Marcus, probably neither, and I have to say I agree with him. In the AI industry, Gary is a pretty polarizing figure. As for where I come down on all of this, I consider myself cautiously optimistic about artificial intelligence. I don't think AI can solve every single problem facing humanity, but I do think it's one of the most powerful tools at our disposal to effect real-world change. And while I do recognize the flaws and risks of generative AI, I also tend to believe it opens a lot of exciting and meaningful doors, especially for creatives. It's a really cool tool that can bolster human talent and, honestly, automate a lot of the tedium we consider to be modern-day knowledge work. Where we run into trouble, as Gary says, is when we start confusing the appearance of thinking with actual thinking, the appearance of reasoning with actual reasoning. It's just not the same process when we open up the hood, and so we're not going to get the same result. These large language models are amazing mimics, but they aren't yet able to reason from first principles, much less explain their reasoning to us, and the risks that emerge in the gap between the appearance of thinking and actual thinking are real. So the decades-long pursuit of artificial general intelligence will continue, but it's okay to admit that the current path may not be the only way to get there. Instead, it's important to remember that there is a range of views on which direction AI will go and how we'll get there, many of them far more nuanced than unbridled accelerationism or hyperbolic doomerism. We can be excited about the possible futures for
AI, but also practice healthy skepticism, and remember that we, as the users and customers of these companies, help decide the AI future we want to see.

The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Elah Feder and Sarah McCrae; our editors are Banban Cheng and Alejandra Salazar; our showrunner is Ivana Tucker; and our associate producer is Ben Montoya. Our engineer is Aja Pilar Simpson; our technical director is Jacob Winik; and our executive producer is Eliza Smith. Our fact-checker is Dan Kachi, and I'm your host, Bilawal Sidhu. See y'all in the next one.
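A note on the neurosymbolic idea Marcus describes earlier in the episode: the core pattern, a neural generator whose output is validated by a symbolic component against a database of known facts, can be illustrated with a toy sketch. Everything below is hypothetical (the fact table, the stubbed `generate_claims` function, the claim format); it is a minimal illustration of the verification pattern, not any real system or library.

```python
# Toy sketch of a neurosymbolic fact-checking loop: a (stubbed) neural
# generator emits structured claims, and a symbolic checker validates
# each one against a known-facts database. All names are hypothetical.

KNOWN_FACTS = {
    ("water", "boiling_point_c"): 100,
    ("light_speed", "km_per_s"): 299_792,
}

def generate_claims(prompt):
    """Stand-in for a neural generator: in a real system this would be a
    language model whose output is parsed into (subject, attribute, value)
    triples. Here it just returns fixed claims, one of them wrong."""
    return [
        ("water", "boiling_point_c", 100),
        ("light_speed", "km_per_s", 300_000),  # deliberately inexact
    ]

def verify(claims, facts):
    """Symbolic side: accept claims that match the database, flag the rest
    along with the expected value (or None if the fact is unknown)."""
    verified, flagged = [], []
    for subject, attr, value in claims:
        expected = facts.get((subject, attr))
        if expected == value:
            verified.append((subject, attr, value))
        else:
            flagged.append((subject, attr, value, expected))
    return verified, flagged

verified, flagged = verify(generate_claims("some physics facts"), KNOWN_FACTS)
print("verified:", verified)
print("flagged:", flagged)
```

The point of the sketch is the division of labor Marcus argues for: the generator proposes, the symbolic checker disposes, and anything the checker cannot confirm is surfaced rather than silently emitted. The genuinely hard part he flags, parsing a black-box model's free-form output into checkable claims at all, is exactly the step stubbed out here.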