Navigating AI in Digital Mental Health with Eliane Boucher, PhD
Navigating the Future: The Promise and Perils of AI in Digital Mental Health with Eliane Boucher
On this episode of The Adaptive Mind, we're joined by Eliane Boucher, a social psychologist and behavioral scientist who transitioned from academia to the digital health industry after a decade in traditional research. Eliane shares her insights on the rapidly evolving landscape of AI in mental health applications, distinguishing between ethical implementations and concerning trends in the industry.
After listening to this episode you'll learn:
How AI has historically been used in mental health applications and how it's evolving with generative AI.
What ethical considerations companies should prioritize when implementing AI in healthcare settings.
How to identify "bad actors" in the digital wellness space who make unsubstantiated claims.
The potential for AI to personalize interventions while maintaining appropriate human oversight.
Why transparency matters in algorithm design and the fine line between helpful AI and uncomfortable interactions.
Join us as we explore Eliane's perspective on creating responsible, effective digital mental health tools in an AI-driven world.
Episode Transcript
Brady: Today I'm joined by Eliane Boucher, who is a social psychologist and behavioral scientist who spent the first 10 years of her career in academia before transitioning to the digital health industry.
Eliane, welcome and thanks for joining.
Eliane: Thanks, Brady. Thanks for having me.
Brady: Yeah, absolutely. So I guess to kick it off, I'm kind of curious how you got into this whole world and how you started doing what you're doing.
Eliane: Yeah, I feel like I get that question whenever I appear on a podcast. You know, I really kind of fell into the world of digital mental health. I'm very honest and transparent about that. So I spent, like you said, the first 10-plus years of my career going the very traditional academic path. I was teaching in psychology programs and really didn't see anything else I was going to do, but had the rug pulled out from under me when I was denied tenure. So I was kind of then left with,
what do I do next? And I was a bit cynical about academia at that point, you know, having had that happen to me. And I also didn't want to move. I didn't want to pick up and leave and start somewhere else from scratch. So I was looking for kind of non-academic roles where I could still do the things that I love to do, which was research, which was writing. And I came across this posting for a research associate with a writing emphasis at a company called Happify.
I really had no knowledge of digital mental health at the time, but it was doing research, it was publishing papers, and it was in an area of mental health where I also felt like the work I was gonna do was gonna have an immediate impact. It was going to help people in a way that I couldn't do as an academic, and that really excited me. So that kind of spurred me getting into this industry, and then I was really fortunate that I worked with some fantastic people who took a chance on me and let me grow, and
I went from a research associate with a writing emphasis to overseeing research strategy at the company at one point. So it's been a wild ride of five years. I think it's been a wild ride in digital health period over the past five years. But I think it was kind of one of those places where it's like, maybe this is where I was meant to be and kind of fell into it by chance.
Brady: Nice, yeah, that's cool. And I feel like the wild ride is just getting started, or continuing, of course, with the role of AI in mental health. So let's talk about that. We're seeing a lot of conversation right now, but AI has been used in mental health, and in digital health in general, for quite a while. What are some ways that it's been traditionally used, maybe for people who aren't necessarily privy to how AI has been used?
Eliane: Yeah, I think it's interesting that you mention that it's getting a lot of attention now. You are seeing a lot of companies like Headspace, which I believe just announced an AI chatbot companion recently. Unmind, I think, has also been focusing on it. And I think there have been another few companies that over the past year or two have really been very vocal about
their investment in AI. But like you mentioned, this isn't really something new. So when I joined Happify, which then became Twill, they had actually just finished a pilot RCT testing their AI chatbot, which was called Anna at the time. And there were other companies that really were doing AI chatbots as kind of the central feature of their programs from the get-go, companies like Woebot, companies like Wysa. So
you know, this has been around as long as those companies have been around. I think what's changing is that the way that AI was used, even when we think kind of in this narrow fashion of AI chatbots, at least the AI chatbot that we used at Twill, it was an AI chatbot that was trained by clinicians. And it was kind of trained to respond as a clinician would, but was really operating in a wellness space and was really just there to be kind of
a way of delivering the activities that would be more engaging for the user. And that's exactly what we found: when people engaged with the interventions that were delivered by the AI chatbot, they actually wrote more, and they used words that were more on target. So the whole idea was we can get them to engage more deeply by having some accountability through this AI chatbot. Same thing with Woebot at the time; it was really kind of just trained to respond in a particular way.
But now I think what we're seeing is the advent of generative AI in these same kinds of ways. So we might have the same kinds of AI chatbots, but now, instead of being trained with closed, predicted responses that might be, you know, fine-tuned to be personalized to the user, we're talking about generative AI that will just create those responses on its own, that will create content on its own or interventions on its own.
And I think that really opens a whole new kind of dynamic we need to talk about. That, I think, is the public-facing AI, but then there's also a lot that happens behind the scenes in terms of AI. So a lot of the personalization happens by having users answer questions, and that gets fed into some kind of model that might decide recommendations to make to them, whether that's programs that they should do, whether that's how the AI chatbot responds to them. So I think there's a lot of those tools happening behind the scenes too, and users may not even realize the intervention is using AI, because that's not public facing. It's all kind of behind the scenes. And I think that's been around for a long time too, and that's evolving.
Brady: Yeah, absolutely. It sounds like, as you said, there's kind of two aspects. One is what you just talked about, these recommendation engines, which really you see almost everywhere. Like if you open up YouTube, and people are more and more familiar now with TikTok and Instagram and the feed that you get, those feeds are using AI. They're, you know, tailoring that experience to your interests and your interactions with the app, and feeding that into some kind of algorithm, which is then spitting out something for the user. So there's that piece that you mentioned. And then you also mentioned that these chatbots have been around for a little while. And it sounded like you said that it used to be almost like an "if this, then that," like this guided experience. And now it's kind of just opened up to be a lot more adaptive and a lot more free flowing, if you will. Is that right?
Eliane: Yeah, or I think that's at least the direction that people are thinking about. I don't know how many companies have made the full kind of switch over to that, but I know that it's where things are headed because they want it to be even more flexible and even more adaptable to different kinds of situations. And I think we're seeing some companies that are taking the time to kind of research that and think about kind of the ethics of generative AI.
And then we're seeing some other companies where perhaps it's unclear, as they unveil their AI, whether it's generative AI or whether it's these if-then types of models that were more historical. But I think that's where things are headed: how can generative AI play a role, particularly in reducing the resources needed for companies to launch and maintain these kinds of products?
Brady: Okay. And so when companies are making that switch over or considering going into the generative AI space, what kinds of things are they considering before they make that jump?
Eliane: Yeah, I mean, there's what they are considering and what they should be considering. I think I can answer that at least from my perspective, what they should be considering, and less so what they're actually considering. I was actually really surprised to find out that there are data and compliance considerations. So if you're using generative AI, depending on what generative AI tool you're using, are you importing data to some external tool
for your generative AI? And in that case, there are some real considerations about data and privacy: we're taking user data and we're feeding it into some other external platform. And I think that's something we don't hear enough about. And I hope that companies are really thinking about how to do that. Are they figuring out a way to close the loop? Are they building some fully internal system? I'm not technical enough to know what can be done there, but I think that's an aspect that
we often, as non-compliance people, don't think about enough ahead of the game. And it's later, when we're kind of in hot water, that we find out we've done something. Now, the compliance people are thinking about this all the time, and we probably should listen to them. But I think that's one consideration that I was a little bit surprised by, as kind of a non-technical person. Like, we do actually need to think about what data we're uploading, and is that data secure, and do we have permission to use that data
in that way. So I think that's one thing that I hope companies are really thinking about. The other piece of it, particularly when we think about whether we're using AI to make recommendations or decisions, right? So if you're kind of recommending what videos to show someone on YouTube, what products to buy on Amazon, or even what programs within a digital intervention they should do,
I think there's fairly low risk, but the risk changes as you start making more consequential decisions. So if you're talking about diagnosing someone, if you're talking about really using AI as clinical decision support software, where you're now making treatment recommendations, whether you're building a whole program for someone based on the AI, or if it's very public facing, so now this is an AI chatbot that's interacting with your users.
One of the things that I think companies really need to be thinking about is the potential biases that are built into these AI models. And I think, again, this is something that we tend to gloss over. And I think in other places, maybe it doesn't matter so much, but in healthcare and other kinds of consequential areas, we know there's already evidence to suggest that most of these AI models are trained on data that is predominantly white, predominantly educated, predominantly affluent.
Years ago, I went to a talk where they showed data on healthcare decisions: there was an AI algorithm predicting risk in healthcare decisions that was biased against non-white patients, because the data it was trained on was biased. So I think we often fall into this trap of saying, it's AI, it's unbiased, it's fine. But then we look at the data it's trained on: biased. The people who are building it: biased, whether we want to be or not. We all have these inherent biases; that's really where social psychology focuses. So I think, I hope, people are being really mindful about how to implement this and test it in a way that really allows them to figure out ahead of time: are we potentially introducing biases that could be risky in certain situations? Are we misdiagnosing certain people?
Are we providing advice that really isn't good? Are we misinterpreting what they're saying because we don't understand the language that they're using? All of these kinds of things I think are really important when we're talking about real users that we also don't see face to face. So we don't have a way to intervene as easily as in a face-to-face, in-person model, where we realize, this didn't work and this person is now at risk.
Right? And so I think those are two big things that keep me up at night sometimes when I think about AI and digital mental health.
Brady: Yeah, so there certainly is a lot of risk, and a lot of advantages and a lot of potential as well. And we can get into that in a second. I would imagine that even if companies are listening to the brightest and the most attentive and the most conscientious of advisors, they're still going to get things wrong and make huge mistakes. Big companies are making huge mistakes right now. It's in the news often. And so for a user or consumer, someone who's stepping into this world, which we're all in, but for someone that's, you know, using these digital interventions, these apps, these websites, whatever, what kind of advice or warnings, however you want to put it, would you give to the person on the other end?
Eliane: Yeah, I mean, I think my consistent advice to consumers is always do some research. I think I've been a big advocate for the fact that we're not doing enough to inform consumers about the apps that I consider to be kind of bad actors versus the ones that are trying to do things the right way. And you're right, we're all going to make mistakes and missteps as we go along,
but I think there are companies that are investing in testing, that are investing in research, that have the right AI people who are overseeing things and fixing things along the way, that are asking the right questions. And then I think the bad actors are really kind of leveraging AI tools to the point where I've started to wonder if there are certain digital wellness apps that are frankly completely AI generated.
Like there's no clinician, the images are AI generated, the content is AI generated. And I'm wondering if we're getting to a point, and I'm gonna sound like a conspiracy theorist, but I'm wondering if we're getting to a point where someone can go and say, build me a digital wellness app, and, without having the expertise, can't then go and evaluate that. So my advice to consumers is: the work isn't really being done for you.
So invest a little bit of time before you download an app. Whether you're seeing it on a social media platform or you found it in the app store, go look to see if they have a website. What does that website look like? If that website is just directing you to their app store listing, I would be really leery. Check to see if they have a science team. Do they have any publications? How do they talk about AI? If they're AI forward, they should have some narrative there. And yes, it's gonna be public-facing marketing language, and probably not telling us exactly what's happening behind the scenes. But at the same time, there is a massive difference, in the research that I've done, with the apps I consider to be the bad actors, which literally either have no website or it's just a website that says, you know, here's us, go download our app. And there's no evidence of subject matter experts, no science behind it, no evidence behind it,
which then makes me very leery if these companies are using AI, that they're just not equipped, even if they're well-intentioned, they're not equipped to be asking the questions and overseeing things to mitigate risk.
Brady: Yeah, and so are there any other, more egregious signs of bad actors? You've used that term a couple of times, but you mentioned the company or the app that maybe is well intentioned but just kind of doesn't have what it should: doesn't have a clinical team or the research behind it. If we go a little bit further, is there something that really concerns you the most? Like the biggest red flags?
Eliane: Yeah, I mean, I think if you're a well-intentioned company that doesn't have the resources to hire a clinical team or to invest in the research, it doesn't mean that you're bad and your app is bad. But to me, the sign of a bad actor is what claims are you making with that kind of information?
A couple months ago on LinkedIn, I actually posted about a really egregious case of an app that was recommended to me on a social media platform that presented itself as a cortisol detox plan. You know, my brain was like, huh, that sounds like a really intense claim to be making. So let me go take their quiz. And this was really a scenario where it was very evident that all the images were AI generated, but they had this kind of
quiz that seemed, on the face of it, very scientific. But because I have knowledge, I know these are not validated tools. The questions seemed scientific but were not very targeted; they could be measuring lots of different things. And then they give me some cortisol score at the end and tell me how they can help me. And after reading some comments on Reddit that suggested you can take that quiz however you want and you always get high cortisol, I went back and I took it again
and answered as if I was totally hunky-dory, like I'm doing super well. And I still got moderate levels of cortisol that obviously need to be managed using their app. And as I reflected on it, I started to wonder whether it's one of these cases where that quiz was actually AI generated, because the only people I can find linked to that company are all marketing people. You know, have we gotten to a point where someone can say, hey, there's money to be made in this wellness space?
And can I go in and just tell AI to build me a quiz to measure stress and cortisol? And it does that, which maybe isn't even a bad starting point, but let's validate that tool. Let's make sure it works, because we know that AI, we know that ChatGPT, is imperfect and still making lots of mistakes, right? So that's probably the most egregious version that I've found, but I'm starting to worry that these are gonna spin up
more and more. And because it's hard, we don't see these companies face repercussions very often. They're making pretty wild statements and claims that, again, I don't think consumers are equipped to recognize as invalid. And so they buy into it. And that, to me, is where it's dangerous, right? Because we don't know what happens to their data. We don't know what happens in terms of their personal risk when they think
they're gonna get better, especially if it's a population that's high risk to begin with.
Brady: Right. So you said that on one hand you have the maybe underfunded, under-researched apps, kind of the yellow flags there, the little warning signs. But then really the big concern is someone that's overselling, trying to use science or research terminology to blatantly deceive people and make them think that the app is something that it's not.
Eliane: Yeah, 100%. To me, it's the deception. That's what makes you a bad actor: are you deceiving? And there are certainly those apps out there. And those are the ones that I think are most problematic.
Brady: Yeah, yeah, that makes a lot of sense. So flipping it the other way, what are some current really good uses of AI that you've seen or that you just think are possible and available right now?
Eliane: Yeah, you know, I think going back to how AI chatbots were used historically, I think we've seen some really wonderful things come out of that work. And Woebot I've always upheld as a company that has invested in AI from the beginning in this fashion, but they've also invested in science. I have no affiliation with them, but you see them invest in the science and ask the ethical questions about generative AI. So I think that's an example of
where AI can be used really effectively, especially if we're careful about how we're expanding its use. So I think in a low-risk population, having an AI chatbot act as some sort of companion, whether they're delivering the full intervention or kind of guiding someone through an intervention. We know from previous work, in the early days of digital mental health, that guided interventions did better than unguided interventions,
whether that guidance was coming from a peer, a therapist, or a coach; it didn't really matter. And to me, what that tells us is people benefit from a little bit of accountability and a little handholding. If I download an app that's totally self-guided and I decide not to use it, there's not much accountability anymore. So even just having an AI chatbot allows you to have a little bit of accountability and makes it a little bit more personal,
allows them to adapt things so that it feels more conversational, it feels more personal. And I think that works really well. And we just have to ask ourselves, how does that change as we go into generative AI? What new risks are we introducing? What risks do we introduce if we start changing now to higher risk populations?
So I think the AI chatbots are a really good example. The other place that I think we probably haven't explored enough is, again, going to AI behind the scenes: how can we truly personalize? And I think we're already kind of building the AI algorithms. So I don't know that it's so much an AI issue as that we haven't really figured out what the key data points are that we need to understand in order to personalize. So I'd love for us to be able to start to think about
what puts someone at risk of dropping off. Can we identify patterns? Can we use AI to identify changes in usage patterns, or changes in wellbeing, that we've shown predict someone stopping, dropping off, right? And is there then a way to re-engage them? I think that is one of the underexplored areas of AI: it would be behind the scenes, it would be relatively low risk,
but we know that engagement and retention are some of the biggest issues for digital products. So if we could use AI to really process the data and the patterns in a way that we aren't capable of right now, I think that's where it could become really a massive asset for us in digital health.
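The behind-the-scenes drop-off detection Eliane describes could start as something as simple as scoring a few engagement signals. Here is a minimal sketch in Python; the features, thresholds, and weights are entirely made up for illustration, and a real system would learn them from historical usage data, validate them against observed churn, and keep human oversight over any re-engagement outreach:

```python
from dataclasses import dataclass

@dataclass
class SessionLog:
    """One user's recent app usage (hypothetical fields for illustration)."""
    days_since_last_session: int
    sessions_last_week: int
    avg_mood_change: float  # negative = self-reported wellbeing declining

def dropoff_risk(log: SessionLog) -> float:
    """Return a 0..1 drop-off risk score from simple engagement signals.

    The weights below are invented; the point is only that gaps in usage,
    low session frequency, and declining mood reports can be combined into
    one score that flags a user for re-engagement.
    """
    score = 0.0
    score += min(log.days_since_last_session, 14) / 14 * 0.5  # long gaps
    score += (1 - min(log.sessions_last_week, 7) / 7) * 0.3   # low frequency
    score += 0.2 if log.avg_mood_change < 0 else 0.0          # declining mood
    return round(score, 2)

# A user with a 10-day gap, one session last week, and declining mood
# scores high and might be flagged for a gentle check-in.
print(dropoff_risk(SessionLog(days_since_last_session=10,
                              sessions_last_week=1,
                              avg_mood_change=-0.4)))  # 0.81
```

In practice the hand-set weights would be replaced by a model fit to real usage histories, but even this rule-of-thumb version shows why the approach is relatively low risk: it only decides when to nudge someone, not what clinical advice to give.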
Brady: Yeah, yeah. And you talked about personalization, we talked about algorithms, and I think one of the knocks on algorithms and AI kind of in general is the lack of understanding of how it works, both for technical users and people who are really kind of conscious of this and in the know, but also, of course, for end users and people that maybe don't care, like they just have no idea how it works.
What do you think is a really good and appropriate balance of transparency for how an intervention works or how an algorithm works or really how the user is being helped?
Eliane: Yeah, I think, you know, I'm always a big fan of transparency, but at the same time, you have to balance the fact that, one, a lot of people don't care and don't want to know, particularly users, right? There's a population of users that just want to use it; they don't really want to understand the science behind it. But then you have a population that really does want to understand it. They want to understand everything that's happening. So I think
the nice balance is offering people choice. Give people the option of being able to dig in if they want, and then provide a basic level of transparency. I think when we think about certain things like how algorithms are being populated and making predictions or recommendations, for instance, I'm not sure that we need to be super forthcoming right at the beginning to say, this is what we're doing. But again,
once someone gets a recommendation, it might be interesting for certain users to say, how did we arrive at this decision, and have some kind of, you know, summary or layman's way of explaining to them how we used these different kinds of data to make these recommendations. I think if they can see that, it might make them buy into it more than just this belief that, you know, we plopped out this recommendation. And the reason why I say that is because I actually think that
personalization is used as a buzzword. I've seen it happen a lot. We all say we personalize our apps. And I would bet that most companies are not personalizing nearly as much as they say they do, that they collect this information under the guise of personalization, but they haven't really gotten to the point. It's aspirational at this point to really, truly personalize. Users can see through it a lot of the time. They can come back and say, well, this feels generic, right? So there is this, I think, even if you have some fairly simple decision tree or if-then kind of algorithm, being somewhat transparent helps people understand what is being used for personalization, what is personalized, and what's not. Because at this point, where we haven't really hit the mark, I think users are becoming cynical that we're doing it at all. And that, to me, is the risk; then you lose people, right? So kind of balancing that, not leading with it, but
using it for the folks who actually do want to dig into it, I think is good. I'm always a big fan of like, give them the option to dig in and learn more if they want to.
Brady: Yeah, yeah, that totally makes sense. And yeah, you don't want to, you know, have it shoved in your face all the time, but if someone wants to know, they should be able to ask and find out. So thinking into the future, what do you think would be a really good ending state, or a good vision of the future, for how AI works in digital health products? And what would be a really bad version of how things work?
Eliane: Yeah, so I guess for the really good version, here's what I would love for us to do at some point. And I do think that this will probably involve some form of generative AI, so I think we've got to figure out some of that stuff. One of the things that we see is there's a lot of talk in the digital health market about point solutions and how no one wants point solutions.
And then there's a lot of talk about kind of integrating. So, you know, someone who has depression, but also has insomnia and has anxiety. And if you go see a therapist, and I'm not a clinical psychologist, but if you go see a therapist or you speak to a coach or you speak to a counselor, they have the ability to kind of bring all those things together and not just give you a strict program.
And I haven't seen it really happen organically in digital interventions yet, where we come up with kind of a bespoke program for someone that targets really the issues that they have, the symptoms that they have. We've seen transdiagnostic tools pop up, but again, they tend to be focused on things like depression and anxiety, but not really focusing on this holistic approach to understanding the symptoms that are negatively impacting someone's life.
be that physical symptoms and/or mental health symptoms, and developing something that can be integrated, truly integrated. So I think my aspiration for AI and digital health is really to be able to create these types of tools where the content is not necessarily AI created; the content's created by subject matter experts, it's been tested. Where we get to a point that we know the data points we need to collect, we can understand someone's kind of ethos and what they need, and then take that content and create something that's bespoke to them, while also tracking them to see: how are they responding to this? Are we seeing signs of potential drop-off? Are we seeing signs of potential worsening? And using the AI to track that data and make sense of that data in a way that I think humans just aren't capable of,
but having human oversight. I think where I see it really going wrong, and I hope we don't wind up there, is where we really see that AI can replace humans. And I think right now we're seeing a lot of talk about AI being used for therapeutic purposes, and I'm not necessarily opposed to it. But again, I hope that we're being careful in thinking about high-risk populations and the fact that a human, whether that human is on telehealth or in person,
can respond in a different way, potentially, than an AI that isn't being overseen instantaneously. And there's something else we have to understand: while there's an argument that people might self-disclose more to an AI chatbot because they don't feel judged, there's also some data suggesting that people are highly uncomfortable with AI presenting itself as empathetic. AI cannot be empathetic. It can appear empathetic, we can create the perception that it's empathetic, but unless AI can start to have feelings itself, it'll never be empathetic. And there is kind of this fine line where, I think, if we try to make the AI too human, people respond poorly to that. It makes them uncomfortable. It just kind of doesn't work for them. So I think understanding where AI is valuable
and where human touch is important and where that line is, is something that I hope we keep in mind and that we don't go too far in the other direction where we go, look, we don't need therapists anymore. AI can deliver therapy effectively.
Brady: Yeah, and I think that line is probably shifting all the time. And that also makes it tricky: when you hear Siri say something to you, it's okay, because you're like, it's Siri, it's fine. But once it's a little more human and a little more lifelike, it gets a little bit creepy. So it'll be very interesting to see where that shifts. Well, thank you so much for your time, Eliane. I think this has been an enlightening conversation. There's a lot to think about and for people to be aware of, especially as we continue into this world of AI. So thanks so much for coming on the show.
Eliane: Thanks, Brady.