AI therapy is here. What does it mean for you? w/ Dr. Alison Darcy and Brian Chandler
Why this matters
This episode grounds questions of AI safety in a concrete deployment domain, mental-health chatbots, and the design and incentive conditions that shape safe outcomes.
Summary
This conversation examines core safety questions raised by AI therapy chatbots. Dr. Alison Darcy, founder of Woebot Health, and long-time user Brian Chandler surface the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 35 full-transcript segments: median 0 · mean -1 · spread -16 to 8 (p10–p90: -10 to 0) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices. A sketch for recomputing these figures follows the list below.
Mixed leaning, primarily in the Society lens. Evidence mode: interview. Confidence: high.
- Emphasizes alignment
- Emphasizes safety
- Full transcript scored in 35 sequential slices (median slice 0).
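For readers who want to sanity-check the headline statistics above, here is a minimal sketch, in Python, of how the slice summary and band mix could be recomputed from raw slice scores. It is illustrative only: the band cutoffs (RISK_CUTOFF, OPPORTUNITY_CUTOFF) and all names are assumptions, since the page does not publish its scoring thresholds.

import statistics

# Assumed cutoffs: the page does not publish its band thresholds,
# so these values are illustrative, not the site's actual rules.
RISK_CUTOFF = -20          # score at or below this -> risk-forward
OPPORTUNITY_CUTOFF = 20    # score at or above this -> opportunity-forward

def band(score: float) -> str:
    """Map one slice score onto the three-band lexicon used above."""
    if score <= RISK_CUTOFF:
        return "risk-forward"
    if score >= OPPORTUNITY_CUTOFF:
        return "opportunity-forward"
    return "mixed"

def summarize(scores: list[float]) -> dict:
    """Recompute the headline stats: median, mean, spread, p10-p90, band mix."""
    ordered = sorted(scores)
    n = len(ordered)

    def percentile(p: float) -> float:
        # Nearest-rank percentile, adequate for a sanity check.
        return ordered[round(p * (n - 1))]

    bands = [band(s) for s in scores]
    return {
        "segments": n,
        "median": statistics.median(scores),
        "mean": statistics.mean(scores),
        "spread": (ordered[0], ordered[-1]),
        "p10_p90": (percentile(0.10), percentile(0.90)),
        "mixed_pct": 100 * bands.count("mixed") / n,
    }

Under these assumed cutoffs, every slice in the reported -16 to 8 range falls in the mixed band, which is consistent with the 100% mixed figure above.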
Editor note
Useful mainstream bridge episode for teams that need a shared baseline quickly.
Episode transcript
YouTube captions (auto or uploaded) · video Cttldd0bmfw · stored Apr 2, 2026 · 1,003 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/ai-therapy-is-here-what-does-it-mean-for-you-w-dr-alison-darcy-and-brian-chandler.json when you have a listen-based summary.
Full transcript
[Music] Like many of us, Brian Chandler was struggling with his mental health during the 2020 lockdown.

Yeah, so the beginning of 2020, I couldn't go anywhere, and I knew, you know, in the past that I had some anxiety, but I was always able to find things that I could kind of use to distract myself, you know, get out, do things. But when I was stuck at home, it was like I couldn't run away from that anymore, and I was forced, like so many other people, to confront that anxiety. And from there I was just seeking something that could help me. You know, I was trying to read books on mental health, I was trying meditation, I was trying all kinds of different apps, and Woebot just was in the mix of those apps I downloaded.

Woebot is a therapy chatbot. Users type in how they're feeling, the chatbot deciphers what kind of problem they're talking about, and then offers a pre-scripted response. So if Woebot asked Brian how he was feeling, and Brian said something like "frustrated" or "annoyed," Woebot would chat back, asking Brian to rephrase his feelings in a more concrete way, sort of like the way a therapist might, to promote deeper self-reflection.

And I remember, you know, one afternoon, I opened up the app and I just went through the prompts, and afterward I found myself feeling a little bit better.

There's something about a chatbot therapist that can make people a little squeamish. You know, it's the idea that the messiness and intricacies of the human mind can be helped by something entirely unhuman. But the power of therapy chatbots is nothing new, as most of the coverage of Woebot has reminded us. The very first chatbot ever was designed to mimic a psychotherapist. In the 1960s, MIT computer scientist Joseph Weizenbaum created a bot called ELIZA, largely to make a point. ELIZA was supposed to emulate a kind of talk therapy where the practitioner would often repeat their patient's responses back to them. If the patient mentioned their father, the bot said, "How do you feel about your father?" When a patient said, "My boyfriend made me come here," the bot said, "Your boyfriend made you come here?" And if the bot didn't know what to say next, it would just say, "Tell me more." Weizenbaum's whole point with ELIZA was to show just how bad robots were at understanding the complexity of human emotion, but his experiment totally backfired. Turns out the people who got access to ELIZA were so entranced by the bot's active listening skills that they wanted to keep talking to it. In fact, Weizenbaum's secretary asked him to leave the room so she could be alone with ELIZA.

Could it be that humans are just so bad at listening to each other that we're willing to convince ourselves that robots really can understand what we're saying? Or maybe human emotion really can be broken down into repeatable patterns, data that these chatbots can respond to effectively regardless. Now that mental health AI is back on the rise, we have to ask the question: are we looking at a future where AI systems replace human therapists altogether? I'm Bilawal Sidhu, and this is The TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything. [Music]

In this episode, we're digging into therapy chatbots. There are many on the market, based on different therapeutic models and with different capabilities, but today we're going to focus on Woebot, which, we should note, now requires a unique access code from your healthcare provider or, in some cases, from your employer. It's far from the only therapy chatbot, though, and as these bots become more sophisticated and accessible, it seems more and more likely that they could disrupt the field of mental healthcare. Later in the episode, we're going to circle back to Brian Chandler, the Woebot user, about his experience with the bot and why he finds it effective even without additional therapy. But first, I'm speaking with Dr. Alison Darcy, founder and president of Woebot Health. When Alison was working in a Stanford health innovations lab, she and her colleagues tested a whole bunch of ways to make mental health care more accessible and engaging. They tried to gamify it, using immersive video experiences, and even explored face-to-face models, but it turned out that the most effective option seemed to look a lot more like ELIZA.

Alison, walk us through: what is Woebot, and how did it come to be?

Okay. Woebot is an emotional support ally. It's basically just a chatbot that you can talk to during the day that helps you sort of navigate the ups and downs. So it's explicitly not therapy, very, very different, but it is nonetheless based on constructs borrowed from the best approach to mental health that we have today. So how did it come to be? I guess I was a sort of clinical research psychologist, and at the same time I think I had this brief moment in my early 20s of learning to code and sort of being in that dev world, I mean, very brief moment. And there's something called the research-practice gap, which is really a sort of intervention science problem, which means whatever we do in the lab doesn't actually get translated well in the community. And so you have all of these people, and actually a growing number of people, with problems, and fewer and fewer resources available, and those that are available are just hard to access and expensive and stigmatized. And actually, you know, in learning about really great cognitive behavioral therapy, it was very clear: like, why are we sort of keeping this up in the clinics? Should we not be teaching this stuff in schools? Should this not be preventative? Could we not do it in a way that's scalable, before it becomes a very clinical issue? And so we set out to kind of try and look at ways that we might go about that. Like, how can you develop that habit? It was like, how can we make good thinking hygiene as engaging as it possibly can be, so that people will want to interact every day?

You said it's based on a therapeutic model of CBT, but not explicitly therapy. What is the user experience of using Woebot today, and how is it different from a therapist?

That's a great question, and one we get, I think, not enough, actually. In the case of mental health, you know, you almost always hear, you know, like, if you're struggling, reach out to someone. But the lived experience of being in a difficult moment, it actually is the hardest moment to reach out to somebody else. And so, you know, really what we're doing is meaningfully designing for that moment: how do you make that as simple as it can be? And it turns out not being a human is so important to that equation, as was demonstrated by a different research group in Southern California: people are more likely to disclose to an AI than to a human. And while that might sound dystopian to some folks, it's not that they're choosing the AI over a human. Really, the choice is, how can I do something right now to help myself? Whereas the architecture of therapy is just obviously completely different, and it is based on, fundamentally, a relationship, right, the human-to-human relationship. That's not what this is. We have certain relationship dynamics, but that's really about being able to facilitate the encounter when it occurs, but in much, much smaller, simpler nuggets as you live your life. In fact, interestingly, our data shows that about 80% of conversations are happening between 5:00 p.m. and 8:00 a.m. the next morning, so absolutely when there aren't other folks around, other professional folks around. And so that's actually why it works. This is a sort of toolkit that folks can use momentarily: as they feel like, okay, I'm in this rotten place, they can just reach out to this thing and be like, hey, I'm not doing great. And then Woebot can say, okay, do you want help right now with this thing? And then, okay, if you accept that, then step by step, talking somebody through how to use their own resources to maybe feel a little bit better, and then just get back to life, right?

You know, I really like this notion of sort of meeting the person where they are, in that moment you're perhaps most vulnerable, where it's hardest to reach out to people, and while it's fresh in your mind: you're living that experience, it's playing out in real time. The product person in me is curious: how do you measure success, and what are those success metrics for the user experience right now that you do optimize for, if it's not about engagement?

It's about feeling better. That's it, ultimately. The ground truth to us is, like, do you feel better now or not? And then it's a sort of, well, how much better do you feel? And if you feel better, oh, this is great, right, let's kind of build on that. And if you don't, okay, let's troubleshoot that: what went wrong? It's a toolset, you know. And I think one of the beautiful things about a Woebot, and an AI in general, in this role of kind of guide, based on CBT as an approach, is that the person is doing the work, right? There's no, I'm bringing some extra wisdom that you don't have about yourself, right? Like, I'm reading your palms, almost. That's not what it is. It's like, this is your skill set; I'm just going to step you through it. And so they get that experience of having stepped through it, and then saying, oh wow, yeah, I actually do have the answers, I just need to be asked the right questions to get there. And I think that's tremendously powerful, because I think some of the dynamics that we can sometimes see in maybe more clinical settings is a sort of diffusion of power, to some extent.

Interesting. So there's like a different power dynamic. It's almost like, because it's a bot, you're constantly remembering that it's up to you to take the steps to improve how you feel, like you're in control, you have the agency. So besides the unconstrained access to this resource, what can a therapy bot do that a human therapist cannot do?

The challenge of that kind of line of questioning is that it's almost set up like a replacement, and I think, you know, it's clearly not a replacement. But there are things that an AI can be great at that a human can't, and availability is definitely one of those things. Perfect memory is another one of those things. Never getting tired, never retiring, never having a bad day, never being hung over, right? Those are all good things that AI can bring to the table. To flip the question, like, what can a human do that an AI can't? Connection, you know. And I think that is so clear. AIs can never be human, and they shouldn't pretend to be, because they're best when they're not pretending to be. And I think if AI can actually just lean into the fact that they're AIs, it's a lot less complicated for our heads to get around.

So, you know, it sounds like you're clearly making this design decision, right? How is that design decision being reflected in the product experience for Woebot?

I think we made this design decision very early on that, like, Woebot should be very clear that it's a robot. Like, I'm an AI, I'm an AI, and we really leaned into it, I think more than most people. Even Woebot as a name and as a visual is there to remind people: this is an AI, this is not human. But in the experience itself, I do think there are specific things that one has to pay attention to, that are nuanced, in AI. For example, if Woebot becomes concerned with something, Woebot should say, you know, because you said this thing, I'm concerned about X, right? So you're showing people, this is the phrase that I'm worried about. Woebot can say, you know, I'm not able to, in fact, give you medical advice here, right? Like, I'm constrained here. Just being very, very clear about the boundaries, what it is and what it isn't. And I think that's where good consultation with clinicians and specialists in the field comes in.

Can you talk a bit more about how this product will work in concert with, let's say, a traditional therapy experience? And, like, you know, do you think Woebot could be sort of this gateway, allowing users to start opening up without this pressure of a real person at the other end receiving this stuff, without the fear of judgment, and then eventually transitioning to in-person therapy? Has that happened? What does that look like?

This was the point of a Woebot: how could you be the most gentle, unintimidating on-ramp into the experience of, you know, managing one's own mental health, really early on, and hopefully in a way that demystifies what full-blown therapy might look like as well. And actually, we've had lots of anecdotal feedback from users to say, yeah, actually, using Woebot made me see, like, what CBT with a therapist might look like. And then, to the other part of your question, like, what does Woebot look like in conjunction with, you know, a clinician or a healthcare professional? We've had lots of really interesting feedback here as well. When they give Woebot to their patients, the patients come back sort of ready to engage in a therapeutic process, a little bit better informed. And the things that they're sharing with their patients, Woebot is sort of facilitating the practice of those concepts or those skills in between sessions, because therapy doesn't really happen in a void either. The more opportunity people have to practice certain skills based on CBT, the better their outcomes tend to be, and so Woebot can facilitate that practice, you know, sort of reinforce what the therapist is sharing and teaching.

Right now it feels like it's very much, intentionally, a sort of choose-your-own-adventure-on-rails experience, for plenty of reasons, right? I'm sure you were pressured enough to be like, hey, Alison, we've got to use the latest generative AI model, et cetera. So, yeah, talk to me a little bit about where you see things going, perhaps in the near term, but then let's start talking about the long term too.

Yeah, I mean, you've hit the nail on the head. There are so few opportunities now, in our interactions with technologies or elsewhere, to learn how to objectively sort of challenge our thinking, and I kind of worry about that being lost as a skill, you know, because almost, like, our emotions are hijacked with a lot of online platforms, as we know, and, you know, there's an awful lot of, like, very strong opinions, and not so much opportunity to say, huh, well, is that correct? I think that's exactly, you know, what we'd like Woebot, or how we'd like Woebot, to operate. In the future, our objective function is human well-being, right? And sort of agnostic to how we get there, within reason. You know, we want to use the best tools that are available to us to be able to get us there, and I think having that objective function be based on wellness, not attention, is so crucial. And that's why, you know, we operate as much as we can in partnership with healthcare professionals and those settings and health systems, because then your incentives are aligned in the right way. But, yeah, look, I think when technologies are available to us that enable us to do a better job there, we'll use them, right? In terms of, like, generative AI, we still have our writers write every line that Woebot says. And we just finished a trial where we looked at a sort of LLM-based version of Woebot versus the rules-based Woebot, and that was just fascinating. That was really just to explore the user experience, and, like, where's the difference here, exactly? And, like, you know, would there be some glaring limitation, or some huge gap, that could only be filled with LLMs? And what we found was actually interesting and intriguing: in spite of that context of, like, twice the accuracy, we found that our users didn't seem to notice a difference. So in a context where there's an equal amount of trust, and an equal amount of, like, the feeling that this is a safe space, and not being judged, and so on, yeah, maybe the accuracy doesn't matter so much, because, honestly, we've built the conversation over the last seven years to be so tolerant of imperfection. You know, I think, for example, if Woebot sort of thinks they're hearing something, they'll say, oh, like, it sounds like you're talking about this thing, is that true? Like, are we talking about a relationship problem here? Is that what I'm hearing? Am I hearing you correctly? And that's just very empathic conversation, right? So if that's what Woebot says in response to a low-confidence classification, that's fine, right? And even if you have a more accurate reading there, wouldn't you always want to say, am I hearing you correctly? Like, you know, that's what good empathy looks like, because even humans don't hear properly.

It is interesting that, yeah, humans perhaps are very good at figuring out sort of the implicit rules of what they're engaging in and just working around them, especially if you set the expectation that this is not a human at the other end. So they're not pretending like they're trying to have this thing pass the Turing test or something like that, right?

It's never been about that, yeah. And people think, well, they might be let down by the fact that it's not a human, and actually, no, no, you're missing the point. It works because it's not a human, and not in spite of it.

It also reminds me: if you have a hammer, everything looks like a nail, and everyone wants to remake everything with, like, you know, transformers and diffusion models these days. And it's interesting because, you know, they do use a lot more compute, and you've got a brilliant case study here, perhaps, where good old-fashioned AI is, like, good enough to get the job done. It reminds me of this quote of yours from a recent article where, upon being asked about generative AI, you said, you know, "we couldn't stop the large language model from just butting in and telling someone how they should be thinking instead of facilitating the person's process." So I'm kind of curious, like, can you imagine a path to getting there with AI, where AI could do just as good a job as perhaps a real-life, human, embodied therapist?

An AI is never going to deliver what a human therapist does. So recently somebody sort of said to me, but, you know, like, an AI can't, like, pick up on, you know, body-language signals. But, you know, the jury's out on how much an AI needs to be able to detect that particular set of non-verbal communication, because it's the fact that it's a human-to-human relationship that is why the therapist needs to be able to read all of that stuff, because people don't feel able to disclose to a human all the time. Do you know what I'm saying? So, like, it's a fundamentally different kind of encounter. It's just a completely different way of engaging, and I think one that humans are very sensitive to the idiosyncrasies of. And so the things that people think of as threats, well, someone's going to get addicted to this, and they're never going to go to a human therapist if they get really complacent with this, I don't think those things are true, because people don't see them as the same thing, right? It's not like, because people start eating sandwiches, they never go to a restaurant again, because they're like, well, I'm fed, I'm hooked on sandwiches. You know, that's just not the way it works. Those things kind of coexist really well.

Yeah, I could see that being super beneficial. But you are drawing this line about, you know, I think, like, an AI would never do that, and I just want to poke on that claim a little bit. Because what you described is, you know, let's say, me as a patient: if I go talk to a therapist, there's stuff that I will explicitly say, and then there are these more implicit signals, facial cues, how I react to stuff. What's to stop AI models from being able to understand all of those nuances, right? If that happens, would that not come close to encroaching upon that therapist-patient relationship?

Okay, I've got to say something controversial. Why not?

Well, let's do it.

I think this idea of, like, divining people's emotional state is a bit of a red herring, honestly, I think, for a lot of, not all, but for a lot of mental health. Because here's the thing: there's a reason why we have self-report measures for mental health, and people say, well, it's not objective, but the point of mental health problems, the actual way that we conceptualize them today, is that they are fundamentally a subjective experience. There has to be subjective suffering. So the correct question is, you know, how do you feel? That is the right question to ask, for a start. Now, I'm oversimplifying. I think there are times in which you'll want to look at the discrepancy between, you know, the person's self-report and what they're showing, but I think for, you know, maybe 80% of mental health problems, it's preferable to ask somebody and start there. The other thing is, yeah, you're totally right, AIs can pick up on non-verbal communication. It's just not the same set of non-verbal communication that a human therapist would look at. Do you know what I'm saying? Maybe the speed with which somebody's answering a question. You could pick up on non-verbals just in the way that somebody's texting. There's a whole set of non-verbal communication that might be relevant here, for sure, that AIs are great at picking up. In fact, we've seen it in some of the algorithms that have been used to predict first-episode psychosis among high-risk groups. And I believe it was an NLP algorithm that was able to predict with 100% accuracy who would go on to develop first-episode psychosis, compared to, I think, the gold-standard measure, which is about 67 or 68% accurate. And that makes a ton of sense, because, again, so much of human-to-human communication is non-verbal. But the idea of, let's replicate what humans do, I think, is misguided and leaves a lot of value on the table.

You know, you talked about the research-practice gap. It's like, hey, all this awesome research is happening, but practitioners aren't actually reading up on it and making the latest, you know, insights available to their patients. And so Woebot is this very cool complement to therapy in that sense, where you can bridge this research-practice gap, make the state-of-the-art findings available, you know, to folks. But I'm curious: it means that people will use Woebot without access to a therapist. Do you think there's any risk to users if they're using Woebot without a human clinician? And this is, in full disclosure, my roundabout way of, like, you know, coming back to the question of, is this really not going to replace therapists? [Laughter] Thinly veiled, I know.

So, Woebot's no longer available for people, you know, just to download in the App Store; you have to sort of get it through treatment providers. We've done a lot of research, and a major objective of that research is to look at the user experience in a very controlled manner, to very carefully quantify any risk or any, you know, safety issues that might come up, and things like that. This is why Woebot is primarily rules-based right now, and everything Woebot says is written by a therapist. But I think the major risk associated with this technology, actually, is that we don't have the correct conversations about it, and that people get spooked. Think of all the missteps that people inevitably will make, because they really underestimate the complexity of what it takes to deliver something, you know, a mental-health-based intervention, into the world. There isn't adequate data, somebody makes a misstep, it blows up on the launchpad, and then everybody starts to think, wow, this is not a good technology to use for this. Whereas really, any AI that you're using is really a tool, and how you use that tool is the most important thing. And I think, you know, the risk that we have facing us is that we are systematically going to undermine public confidence in the ability of technology like this to help, and that is a big potential problem, because I think this is probably the greatest public-health opportunity that we've ever had.

There's a lot of responsibility on your shoulders to, you know, make sure this is a shining beacon and, like, a great example of how you do, you know, AI-augmented therapy, essentially. In terms of ways this could blow up on the launchpad, right, obviously data privacy is one that comes up, and when people use Woebot, they're sharing some of their most personal and private data. So will user data be used to improve the Woebot product experience or the underlying models?

We're HIPAA-compliant, obviously, and, you know, all the data are encrypted, and we have consent for each and every use, similar to what you have with GDPR. Privacy and security is a topic that is absolutely front and center the whole time, because I think a breach there, or any kind of semblance of any kind of negligence on our side, would be catastrophic.

Going back to the future a bit: in an ideal world, Alison, what would mental health care look like in five years' time? I'm curious.

In an ideal world, we would shut down all our clinics, and my profession would become obsolete, because everybody is looking after their own mental health. We should be doing such a good job, and everybody is so, you know, happy and healthy. Now, that's not realistic. You know, remember in COVID we were talking about flattening the curve? I think we need to flatten the curve here as well. We need to try and keep people out of clinics if we can, by, you know, providing access to really good preventative, you know, tools that they can use. We should absolutely not be waiting, you know, a decade or so between when people first sort of start to struggle a little bit, with maybe a couple of symptoms here and there, and when they actually get in front of a clinician. People should have, I think, very good evidence-based tools that they can turn to from the first moment of, like, you know, intense emotion that gives rise to distorted thinking, because that's part of the human experience. It's not about being in a clinical realm, it's not about necessarily needing a diagnosis. It's about sort of being there in a moment of need, as early as you possibly can, and getting somebody well when you can, and then freeing up our precious human resources for when people actually do need more significant help.

I love that. We've got a bunch of technology that we use every day that, you know, plays to our hopes, wishes, desires, anxieties, worries, literally all the time, without us even knowing. And so it strikes me that there should be a countervailing influence to that, you know, a correction measure to that. And, you know, starting as early as possible and making it as accessible and frictionless as possible to get access to this type of evidence-based care strikes me as one great way of making a dent towards that goal you have. And who knows, maybe you will accomplish it in five years. Well, thank you very much.

As you've heard, Dr. Darcy insists that Woebot is best used in conjunction with a human therapist. But after the break, we're going to hear more from Brian Chandler, who's actually been using Woebot without additional therapy since 2020.

So my name's Brian. I'm about to turn 25 here, and I've been seeking mental health, and honestly mental clarity, I would really say, since the pandemic. So it's been about four years now that I've been kind of working on my mindfulness journey.

Do you remember your first interaction with Woebot?

When I first used it, I guess I had such a low success rate with the other apps, I wasn't expecting too much. But when I used it, I did feel better, you know, and I thought, okay, well, maybe this was just a fluke. Let me get back on the app the next day. And I had a similar feeling, and I was like, okay, well, let me get back on the next day. And, you know, eventually, when it's three, four, five days of feeling better afterwards, you start to realize, okay, I think it is the app. I think it is what it's teaching me. It's teaching me coping mechanisms, it's teaching me how to label. It's doing exactly what a therapist would tell you to do, but I'm doing it from my phone, for free, in the comfort of my home. And it was very convenient, because at the time I couldn't go anywhere.

Can you describe your typical interactions with Woebot? Do you find yourself using it in a certain way?

I typically like to use it really twice a day. So I think it's really important to start the morning off right, with the right headspace, reminding myself that we do have some control of how the day can go, or some control of at least being mindful of our thoughts, and understanding that anxiety doesn't have to have power over you. That kind of helps me get the day off on the right foot. And then, especially lately, I do like to close the day off using it, so that before I go to bed I am still in the right headspace. So, really trying to have those two anchors of the beginning of the day and the end of the day. But that is the nice thing about the app: it's 24-hour. I mean, you can use it whenever you want, you know. So there were certainly a few times where maybe I was having a panic attack at, you know, 2 a.m.; I open up the app, and it would help me. It was kind of like a reset.

So you're kind of checking, you know, the monkey mind, if you want to call it that, you know, starting first thing in the morning, which is an analogy I really, really enjoy.

Yeah, when you wake up in the morning, kind of trying to gauge how your mental headspace is, like the weather, you know. So if maybe you're waking up and you're grouchy, or maybe you're depressed, or maybe you're anxious, I mean, you can kind of use the analogy that, okay, today it's thunderstorms, you know, and what can I do to better prepare myself? Maybe you're the type of person where, I need a distraction, I need to hang out with a friend. Or maybe that's the opposite of what you need, and you're like, I just need to be by myself today.

I'm just curious: do you have any experience with conventional therapy?

So prior to using Woebot, I didn't have any experience, so I didn't really have a reference for regular therapy. In 2022, I believe it was 2022, I wanted to see really if I felt a difference, and I did try regular therapy for a couple weeks. And while I think regular therapy can be very good and very important for some people, and I don't want to discourage that, for me personally I didn't see really a difference between using the app and regular therapy. And again, I know regular therapy is fantastic. I think if you need it, I definitely recommend it. But sometimes it's inconvenient, you know. Sometimes you can't just talk to a therapist whenever you want, and you're doing the same practices that you're doing on this app that's free. Not to mention, therapy can be very costly. So I just figured, if I'm doing the same practices, I'm feeling the same relief after I use it, and I can use it anytime, without leaving my house, just on my phone, it was a no-brainer to continue using the app.

Now, I have to follow that up and ask: in therapy, there are all kinds of boundaries, right? There's time, there's what you can or can't, or perhaps shouldn't, know about your therapist. Do you get a sense of boundaries when you're using Woebot? If so, what are they?

In the beginning, when I was using it, especially because I didn't really have experience with therapy, it took a moment to get used to. It felt natural, but you did kind of feel like, you know, this kind of feels like I'm talking to a human. But the more you use it, you realize it's not trying to be anything other than what it is, you know. It's AI. And, like you said, there are some boundaries, because there are going to be some things, you know, the app might not understand about, like, the human experience. But I think it's programmed in such a wonderful way, where it's never pretending to be more than AI, if that makes sense. So as far as the boundary goes, I would just say, you know, okay, this is not a human; if this is an emergency, 911 situation, I need to, you know, reach out to a human. But the one nice thing about the app is you can use it anytime, you know, so you don't have to deal with that boundary you would have to deal with with traditional therapy.

Are there any telltale signs to you that it is AI? Like, how do you know it's never pretending to be something more than it's not?

A lot of it is how it frames and words certain things. It does frequently tell you, honestly, probably each time you use it, that it's a robot, or it might make a joke of, you know, something to that degree. You never feel like it's trying to be human. It does a good job: just, this is research, this is what works, and you can take that at face value, you know.

So if I had to ask you, knowing, you know, what we know now, and what you said, how would you define your relationship with Woebot?

I guess I would describe it as my mental health companion.

Do you think you'll continue using Woebot for mental health care?

Yes, I think I will continue to use it.

You talked about mindfulness earlier on. Have you tried some of the meditation apps out there, like Headspace, et cetera?

Yeah, so I think meditation is great, but it is a totally different pace. I enjoy the app Calm. I think that's a very good app; I've had a really good experience with that. But I've found Woebot to be a little bit more helpful in situations that are a little bit more urgent.

I am kind of curious, like, more of a hypothetical question, right? If you could have an AI model understand more of your life and kind of give you contextual advice, right, maybe it involves you sharing all your conversations during the day, your emails, your text messages, is that something that you'd be interested in, if you could get sort of contextual advice through the day, really tailored to your situation? Or would it be creepy?

It wouldn't bother me, if I knew the information being stored was safe. So I think that needs to be a priority going forward: that the information isn't being sold, that the information is being stored safely. And then I would feel comfortable with it remembering, because I know it's going to help with the tools in the future.

Taking this technology to the limit: when it does get better, when it can understand your context, when it can be respectful of the data that it collects, would you want Woebot to evolve into something that feels more like talking to a human, or would you rather that it stays in this very clean delineation of a tool?

In this moment in time, I think I would rather it stay. I would like it to evolve, but I don't want it to ever get to the point where, you know, maybe they, like, add a voice, and you're talking to the voice, and it sounds very human-like. I don't think I would like that. But kind of like what I mentioned to you, all of this technology is so new, and we just aren't used to it, so I don't know if in a few years that's just going to be the new norm. But right now, I do enjoy kind of the boundaries the app creates, where, you know, if you were needing a human connection, go talk to a human. I think it's so important for people to be able to work on their mental health, and especially in this day and age, where we're spending more and more time on our phones, we need to have a moment where we can put TikTok down and go to something that's going to benefit us.

Brian, thank you so much for your time. It's amazing to have you on the show.

Yeah, thank you.

Like many of you, I've been on my own mental health journey. Over the last decade, I've really gotten into the work of Alan Watts and Guru Nanak, and I've cultivated a meditation practice. And probably like many of you, I got started by hopping from one app to another, with a goal of keeping myself centered. And I know I'm not the only one, far from it. And you know, it makes me wonder: if so many of us are already using apps to seek mental calm and clarity, it might not take a lot more convincing for us to start using apps like Woebot. As of now, AI therapy is a tool, and like many other therapy tools, its mileage will vary from person to person. But as this tech continues to advance, that gap between the in-person and virtual therapy experience will also continue to close. You've literally got a supercomputer picking up on every single intonation, nuance, voice change, and inflection. Is that necessarily a bad thing? I don't think so. Mental health is such a problem, and if we've got some technology that can help us tame our monkey minds, I think that's a win. [Music]

If you want to know even more about the history of the therapy bot ELIZA, 99% Invisible made an incredible episode about her. The link to that episode will be in our show notes. The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Elah Feder and Sarah McCrae. Our editors are Banban Cheng and Alejandra Salazar. Our showrunner is Ivana Tucker, and our associate producer is Ben Montoya. Our engineer is Aja Pilar Simpson. Our technical director is Jacob Winik, and our executive producer is Eliza Smith. Our fact-checker is Krystian Aparta. And I'm your host, Bilawal Sidhu. See y'all in the next one. [Music]