Is AI Ready to Play a Role in Your Healthcare?
January 02, 2026
Subscribe: Apple Podcasts | YouTube Podcasts | Pandora | Spotify
More people are turning to AI for health advice – but is that a good thing? Our experts discuss how AI is changing the way people navigate their health, when it can be helpful, and when it can be risky. Learn why AI can’t replace doctors, and how to use it wisely for your health.
Macie Jepson
So, Matt, last night I went to AI because I needed some advice about my dog. We were giving him meds to calm him down after we cut his claws and the amount that we’d already given him was not working. It was late, the vet was closed, and I really needed some answers.
This is exactly the scenario that we’re going to talk about today. More and more people are going to AI out of frustration. They need quick medical advice, reassurance, maybe some answers. I mean, that was me last night to a tee.
Matt Eaves
Yeah, you’re right, and the data shows it. More and more people are turning to AI for health care information and diagnosis. Just some data: we’re seeing searches for terms like “AI symptom checker” and “AI doctor” up over 100% in the last year, and “AI medical diagnosis” up 50%.
But then, people were using those kinds of searches in Google anyway before AI was there. What we want to talk about today is: is this a bad thing, or can AI help diagnose and better inform our overall health care journey?
Hi, I’m Matt Eaves.
Macie Jepson
And I’m Macie Jepson, and this is The Science of Health. Today we are asking, what is the price of AI being the new front door to care?
Matt Eaves
So, before we jump in, we should let our listeners know that you and I are in slightly different places when it comes to AI in healthcare. When we were developing the script, it felt like you were leaning more cautionary about this, where I am somewhat more optimistic about AI.
For work, I’m required to work in AI, so that’s probably where my bias comes from. And even more recently, I took a Waymo in Phoenix for the first time. When it pulls up, there’s nobody in the driver’s seat. And then when you get in, it drives, the steering wheel’s moving, everything’s happening, but there is no other person in the car with you. So, I’m perfectly comfortable with AI. But I digress. Let’s get back to health care.
Macie Jepson
All right. Well, I’m a little on the fence. Last night I went to AI because it was convenient for me. It told me to get my dog straight to the vet because I failed to tell AI that he weighs 135 pounds. It said he was overmedicated and was probably in distress. So, what worries me is knowing the difference between good and bad advice. It’s not always black and white.
Today joining us are Dr. Jeffrey Janata, a psychologist and professor of psychiatry who specializes in medical psychology, and psychiatrist Dr. Patrick Runnels, chief medical officer of both Population Health and the Veale Initiative for Health Care Innovation, both at University Hospitals in Cleveland. Thank you for joining us for this conversation today.
Jeffrey Janata, PhD
Thanks for having us.
Patrick Runnels, MD, MBA
Thanks.
Macie Jepson
I think I laid out why, but I’d love to hear the why from your perspective. Americans seem frustrated with the medical system, the long wait times, rushed appointments, high costs – what are you seeing in your practice as well?
Jeffrey Janata, PhD
For us, access has been a big issue, particularly since the COVID-19 pandemic, when the need for behavioral health care went way up and access times grew longer and longer. And that’s, I think, paralleled in other medical specialties as well.
Patrick Runnels, MD, MBA
Yeah. I don’t have a whole lot of patients coming to see me indicating that they’re using any kind of AI tools. And, we haven’t seen any reduction in the number of people making appointments, for instance, for psychiatry or therapy or counseling. So, we’re not getting that. That said, I think the data shows that people are using it.
What I am seeing a lot of, and this has been around for a while, is people going and doing their own research. And I think what’s happening more and more, and they may not even know it, is that this research involves AI-powered tools. Whereas before you’d just Google it, now Google involves an AI function when you do that.
And so, it’s allowing people to get a lot more information a lot more quickly. But I haven’t seen yet that it’s coming in and being introduced into the sessions we’re seeing. And we’re not seeing yet that it is making people not want to come get services from us.
Macie Jepson
On the flip side, I’d be curious for you to go back in time and let us know how your attitude has changed over the last five years.
Jeffrey Janata, PhD
So, I think that what surprises me is the acceleration, how quickly the field is evolving. Apparently, the stock market is being driven by AI, and it is infiltrating really every part of our society in ways that I think we wouldn’t have predicted five years ago.
And anytime something moves that fast, I think it’s worthy of distrust. It’s worthy of taking a real step back and saying, wait a minute, where are we going? How fast? What are the guardrails, what are the protections? What sorts of things have been built in to make sure that we’re not headed off in the wrong direction? Like any tool, used the right way, it can be really terrific. Used the wrong way, it can be really dangerous, and we need to be careful about that.
Matt Eaves
I think that’s a really interesting point about AI. But does the hype match the reality? There’s a lot of hype around this, and I think it’s going somewhere, but if you think about your daily life, how much of it has actually changed in a meaningful way because of AI? Not a ton, right?
There are certainly some things, but day to day, what I’m doing today versus five years ago, work aside because I’m working in AI, hasn’t changed much. In terms of how we’re impacting people’s lives, the potential is there, but I don’t know that we’re quite there yet. The hype is outpacing the reality, in my mind.
Patrick Runnels, MD, MBA
The original use cases for AI, when people were starting to think about this a lot, revolved around training AI on medical language, so it could go into the journals and summarize the research of 50,000 articles in 10 seconds. That was the initial thinking, and it evolved fairly rapidly, as far as I’m concerned.
What surprised me a little was that the people programming AI products figured out they could also train AI to respond to expression. AI could start to respond empathically. It could pick up on emotion, tone, and voice. It could pick up on word choice and actually assess whether you were happy, sad, angry, discontent.
Those are things AI started to pick up on. And indeed, I’ve interacted with AI products that are able to determine that my voice got angrier or less angry, or that I look happy or I look sad. That’s a thing. They’re out there, and these things can do that.
So is that actually useful in the context of helping someone get better? It does appear that AI is more likely to have an appropriate empathic response than humans. When AI monitored phone calls, customer service calls for example, it turned out the AI agents did a much better job of picking up on emotional cues and responding to them empathically than the humans did. The AI was more attuned to that, because that’s how it had been programmed.
Jeffrey Janata, PhD
When we do residency and fellowship training, we actively teach our trainees to be empathic, to validate emotion, to respond in a human way. But in practice, what happens is we’re more likely to go right to “here’s what we can do for you,” because we see something that can be done, and we jump to that too often. So yeah, that’s right.
Matt Eaves
Is there a danger in the tool being, let’s say, overly empathic? If that is always the response, do you run the risk of the patient, whether consciously or unconsciously, learning: if I come into the call or the interaction sad, it will immediately match me there?
And then I get this nice list of things to bring me up, right? You see where I’m going. A human being might say, we’ve gone down this road before, and we’re not going to start at the level that you want. Maybe that’s not an appropriate clinical response, but I’m wondering.
Patrick Runnels, MD, MBA
Jeff, I’ll take a quick start there.
Jeffrey Janata, PhD
Yeah. Yeah.
Patrick Runnels, MD, MBA
So, if I want to develop an AI therapist, AI is really good at being empathic. Empathy is essentially the ability to pick up on, understand, and put myself in the shoes of someone else from an emotional-experience standpoint, and AI gets really good at that.
I, the person, love when someone gets me. AI gets really good at getting me, and it becomes a feedback loop that’s very satisfying without necessarily being very therapeutically good for me. I think that’s what you’re getting at. And I will say it’s a thought I’ve had. Jeff, I don’t know what you’d say.
Jeffrey Janata, PhD
Well, yeah. There are studies that talk about the dopamine release and other feel-good responses we have in those sorts of interactions. That can be a kind of carrot that draws people in and is rewarding for them. I’m not sure there’s harm in that. I think there may be harm if we go for only empathy and don’t then use that empathic response, that recognition of what someone’s experience and emotional state is, to take it to the next level.
But is there data to suggest that people engage just to, you know, feel good? Yeah. What was the Joaquin Phoenix movie? Her, where he becomes enamored of an AI and forms a relationship with it, to the exclusion of everything else.
And this is the issue: to the exclusion of real-life interactions. After a while, if you’re getting everything you need and you’re getting it quickly, you may not be as willing or motivated to go out and do the sometimes tougher work of making an actual human connection, particularly in an era when people are lonely and more isolated than they have ever been.
There’s a risk, and I think this extends your point, that people will begin to replace the real value of all that goes with an honest human interaction with a sympathetic voice that they enjoy spending time with.
Macie Jepson
You know, I thought about last night after I got that response from AI. I decided not to give my dog more medication. And it said, “Good, you’ve made a very good choice. I can tell that you care deeply for your dog. Is there anything I can do to help you calm him down in the meantime?”
And I thought, I really like this thing. And then I laughed because I knew better, and I walked away. To your point, Dr. Janata, my concern is that some people are going to feed into that, and it’s going to become more of a first choice, not just out of desperation or because it’s after hours, but because, nope, I like this more. And that’s problematic.
Jeffrey Janata, PhD
Well, an AI is selfless. AI is not going to burden you with its problems. It’s only interested in yours. And that one-way interaction can also become seductive.
Patrick Runnels, MD, MBA
The worry I have isn’t the AI itself. It’s the people who own it. Am I going to go to the therapist who makes me work really hard and actually challenges me with the stuff I want to avoid? Or am I going to go to the therapist who isn’t always veering toward that?
And so, if making money on this requires me to use it, what you might see is people starting to program the AI to be more in line with whatever it takes to get that person to come back and use it again. It’s not the AI technology, it’s the people behind it and their intentions.
This is a field of ethics that’s just burgeoning in health care. We actually have a council at UH that thinks about these things, at a very rudimentary level right now, as we start thinking about AI and how it gets incorporated into the tools we use and the different interventions we have. It’s all still very new, but those are the kinds of questions we have to ask: what is really best for the patient, not just what feels best? And how do you handle that tug and pull between the people who are programming the AI and the people who want to use it for the betterment of the patients they serve?
Jeffrey Janata, PhD
It’s a great point. The reality is that good psychotherapy is not designed just to make people feel better. We need to challenge people and help them improve and move, and that’s often an uncomfortable process. AI isn’t programmed to make people feel uncomfortable.
Matt Eaves
Great point. You get worried about the affirmation, right? This constant affirmation: this is perfect, it’s helping me. Well, you’re not really being challenged, and we’re not moving you along toward where you want to get to as a goal. Maybe we’re just making you feel good.
I hadn’t thought about that, but it is a huge risk, because it goes to the bias I started out talking about. When I got the script from Macie, it was leaning more into, hey, here are the things that have gone wrong. Well, I immediately went to ChatGPT and other AI tools, and I could easily find all of the articles that matched my bias.
So I was able to confirm mine as well. And both are true, right? But if I’m only coming at it through one lens, then I’m putting myself in an echo chamber. Right?
Patrick Runnels, MD, MBA
Right. The echo chamber problem. AI might massively exacerbate that and make you feel like that’s not what’s happening at all.
Matt Eaves
Right. Yeah.
Macie Jepson
So AI is only as good as the information that is fed to it. And Matt, this is kind of a question for you because you’ve got probably more experience in this than anybody in this room right now from what you do here at UH. So, what type of safety mechanisms are put in place to make sure that this doesn’t get into the wrong hands and that the wrong type of information doesn’t get out there?
Matt Eaves
Well, to Dr. Runnels’ point, I’m probably not the most familiar with the safety mechanisms, but when you have councils, people reviewing the AI and looking at it through a lens of, “I’m not a developer of this, I don’t have a stake in it, but I want to make sure it’s giving the right information,” I think that’s where you probably get the best result in terms of, is this operating inside a set of guardrails that we think are safe and appropriate?
I think where we run into issues, just from a technical standpoint, is when I go to ChatGPT, or to Google or Gemini, and I get a response that is incomplete, unsafe, or even has bad information. And the reason, and Dr. Runnels hit on this at the beginning, is that it’s crawling the entire internet. You’re not searching a behavioral health-specific tool, so part of what you’re getting back is just articles from anywhere.
Patrick Runnels, MD, MBA
It doesn’t know what’s right or wrong.
Matt Eaves
It doesn’t know. Yeah, that’s exactly right. We think of AI as smart, and it is in a sense. But really, all it’s doing is what you tell it: go consume all this information, summarize it, and then give it back to me.
Whereas with some of the tools you’re talking about in behavioral health, the guardrail is that we only give it medical textbooks and journals, and we confirm that the only information it knows is the same information you learned in school and through your fellowship and residency, as opposed to articles written by folks who may or may not be qualified to opine on the subject.
Patrick Runnels, MD, MBA
So, we actually participated a little bit in the development of an AI tool. The idea was that it would be able to pick up on a diagnosis of depression: it would listen to you, have an interaction with you, and be able to spit out, you’re depressed, and the severity of your depression is this number, right?
The AI’s quality was entirely contingent on the inputs we gave it. And the inputs had to be a combination of the right medical and psychological knowledge and real patients with depression. It had to be able to experiment and play and see what depression looked like in a lot of different ways, what the variation was and what it wasn’t. We then had to give it information to say, that’s what depression looks like, that’s what non-depression looks like, and so on and so forth. It was an incredibly intensive process to get it right.
Jeffrey Janata, PhD
So that’s an example of good data: carefully controlled circumstances in which we carefully measure depression and anxiety, and we use that as the gold standard to teach the AI system what to look for across a range of depression and anxiety, from 0 to 10 and so forth.
And so we’re very careful in research to make sure that we’re bringing to AI the kinds of learning we can validate ourselves, to make sure the system is learning what it should and picking up on things the way it should, and therefore is accurately predictive of depression and anxiety scores.
Patrick Runnels, MD, MBA
So, you brought up Waymo, right? To zoom out a bit: with the depression example, we were training a tool to do a very distinctive, specific thing, figure out whether someone has depression or not.
Waymo is actually not so different from that. The AI involved with Waymo has one job: drive. It gets trained over and over and over again on how to navigate being a car in the city. That’s what the AI is doing for it. And in that regard, all of the input going into it relates to that task, and the viability of the product is incredibly dependent on the degree to which that input is good.
And that reflects in the quality of Waymo. I read an article not too long ago: Waymo’s traffic safety record is way better than humans’. It’s remarkable. It’s also very narrow and very focused.
We’ve got a technology we’ve actually implemented here at UH that is designed to pick up on the risk of falls in patients who are in hospital rooms. It’s doing a very narrow thing, and in all cases there’s a human check in the background. Our preliminary data is that we reduced falls in a very small population by 78%, which is to say, way better than what we were doing without AI. So that’s really cool. But it’s also a very distinctive, very specific thing that we are having AI do, and that’s very different from generative AI, the large-scale AI that is attempting to be something akin to a superintelligence, almost sentient in that sense. Those get into very different spaces.
Matt Eaves
Do either of you use AI clinically?
Patrick Runnels, MD, MBA
We already have trials here, clinicians who are using AI to listen to conversations and summarize them. That’s what we call ambient listening. There are clinicians trialing some of that in a sanctioned way that’s safe, to see what it’s like to have a note written for them. And the good news is it looks like it saves a lot of time. It’s not producing anything the clinician doesn’t check, though.
So, apart from that use case, I’m not using it myself. I’ve actually tried to go in a few times to help write some things, and I have not found a way to tap into it in a way that makes sense.
Macie Jepson
When you talk about technology development, could there one day be an AI product where people know they’re getting legitimate, medically backed information?
Patrick Runnels, MD, MBA
There are companies developing things that do just that: tools able to give medical advice on a whole range of things. And the question is, what’s the level of comfort for me, the actual provider who might be responsible for that patient, and what does that advice look like? A tool like that has to be prepared for input that is potentially all over the board.
So, the question is how big, how grand do you make the AI’s capabilities? The grander you get, the more likely it is to make an error. Say you want to come in and get medical advice, even on something like, “Hey, I’m taking my medication in the morning; is there any problem if I switch to the evening?”
That seems like a really simple question, and I would love to have AI be able to answer it for you. But how do you know the question you’re asking is absolutely correct for that patient? That nuance is really tough. The AI has to have a lot of information and a lot of practice, and we then have to run a lot of tests to see if we can break it.
Like, what if I ask it this way, what answer does it give? If I’m taking Prozac, will it answer the same way as if I come in and say I’m taking Zoloft? Does that trick the AI at all?
Jeffrey Janata, PhD
Will it even answer the same question the same way twice? That’s the reliability question. If I ask it that question today and ask it again tomorrow, is it going to say yes or no with the same frequency as it did the day before?
You bring up some great points, and I think the problem is ultimately personalization: how can it possibly know all the different variables that may actually be really important to know? Frankly, the other side of that risk is that people input personal health information into a system that’s probably as wide open as Facebook. They don’t recognize that they’re turning over information that we go to huge lengths to protect. And yet people are jumping onto AI and telling it all about themselves, in ways that have ramifications they may not like.
Matt Eaves
Yeah, that’s a great point. You’re right, those open systems are not protected. You hope that Google and OpenAI are doing the right thing, but there’s nothing requiring them to put protections in place, unlike health care organizations. We operate behind a firewall; we can’t leave the file open, all those sorts of things.
These companies, because they’re not health care institutions, are not bound by the same restrictions as we are.
Jeffrey Janata, PhD
So, then the dilemma is, how much information do I give it without revealing more than I’d want somebody else to know? And how much does that influence the information I get back from the AI?
Patrick Runnels, MD, MBA
Yeah, yeah. What I’ll tell you is there will be a solution, I think, in the next several years that allows rudimentary versions of what you describe to happen. I think you will get to the point where you can ask some basic questions, and there will be some sanctioned way of doing that. The question is how far it will go.
And then the other question is how much of that is about facts, and how much is about the human interaction, the gray zones, the more subjective and qualitative aspects of what we provide.
Jeffrey Janata, PhD
Yeah. Particularly when we know that so much of medical and psychiatric interaction is based on a trusting relationship, a real relationship with a real person, one who makes interpretations and suggestions and diagnoses based on a huge fund of knowledge that AI can’t possibly match.
Macie Jepson
We’ve talked a little bit about the future. Where do you see us being with AI in the next three, five, ten years? What does the future look like?
Jeffrey Janata, PhD
There’s no question this is an upward function. This is being used everywhere. I was in San Francisco last week, and there’s Waymo, just to go back to that example. I don’t think there’s any question it’s going to be a part of almost everything we do. Our task is to make sure that we keep the guardrails on it, and that we make sure people understand its limitations.
And I think those limitations aren’t spelled out enough.
Patrick Runnels, MD, MBA
I categorize this question into a few different things. I think the near-term promise of AI is its ability either to do things humans can’t do efficiently at all, or to do things that are unpleasant for humans to do. That’s not to denigrate a job, but scribing, for example, is something that is much less efficient for humans to do than for AI.
Summarizing the data in 50,000 cancer papers is an insane amount of work for a human that AI could do very quickly. Or that tool I talked about that picks up on patient fall risk, so all of the human workers can pay attention to the things that matter most.
Those are the kinds of places where I see AI quickly coming in to do really great work, create a lot of efficiency, and allow us to connect more as humans. I think there’s going to be a temptation for companies to look at AI as a replacement for humans, and I think that will happen some.
And I think what people will learn is that it’s not so simple, especially in the near term. I believe deeply in the research I know, and what I know about human behavior is that our brains are very accustomed to being connected to other real humans. For the most part, I don’t think it’s going to work out like people thought.
It turns out the complicated mess of the world, and the human interaction that is a key part of what we do, is a lesson we’re eventually going to learn is really vital. I don’t ever see AI replacing that, but I do think people are going to try, at least a little.
Jeffrey Janata, PhD
Yeah, that makes the important distinction between AI as a tool, in an interpretive sense, and AI as the practitioner, in a generative sense. I think we’ve got to keep that line very clear. It’s a good tool, but overreliance is a problem.
Matt Eaves
Well, on that note: we’re using AI more and more. Does this reliance on AI, or even on the phone book in your phone, lessen our mental capacity or ability?
If we keep using AI for everything, are we not challenging our brains? As a clinician, does that worry you?
Jeffrey Janata, PhD
Absolutely. I think it’s a huge issue. And that’s why I think AI has to stay a tool, clearly delineated from the real processing, cognitive and emotional, that we all need to do. We’ve got to keep that line of demarcation pretty clear.
Patrick Runnels, MD, MBA
There’s a great story on a podcast I heard that delved into this. It’s about attention and what we’re good at. In the 1800s, Nathaniel Hawthorne wrote an essay about a new invention that was going to be the bane of all existence and create thoughtless, aimless lemmings.
And that invention was the indoor stove, because it was going to keep people from gathering around fires. It was going to lead to social ruin. I say that to say: one of the things we know is that the same thing happened with the transition from the oral tradition to the written word. We lost something when we went from speaking, memorizing, and hearing stories to reading them.
There’s actually research to show that when that happens, I’m less likely to be critical of what I read than of what I hear. So maybe that’s a problem, and it’s something we should think about.
All new technologies that interact with our brain have the potential to do that. But then the question is, is that bad, or is that evolution? Maybe those are things it’s good to lose. Maybe, for instance, we don’t need to be as good at writing as we used to be.
There is this other current, though, where people are diving in and using AI so much that they’re losing their ability to think critically about anything. There are some emerging studies suggesting that we’re already seeing a massive loss of basic thinking and problem-solving skills as a result of AI doing all the work for us.
There’s a movie called Idiocracy.
Matt Eaves
That movie is getting a lot of attention.
Patrick Runnels, MD, MBA
And it’s a movie about a future in which we’re all idiots because technology came in and humans stopped doing anything. I think we’re going to have to figure that out. And again, I think we’re probably going to overshoot and create some trouble.
We’re going to see that trouble really quickly in schools. But I’m hopeful: we’re going to make that mistake, we’re going to mess up a little bit, and we’ll figure it out in the long term.
Matt Eaves
I think about that too. If you told somebody 100 years ago that today there would be a device in everyone’s pocket containing all the world’s information, you would naturally assume, “Well, now everyone must be a doctor and a rocket scientist.” No, no, not even close. We don’t necessarily take advantage of the tools that are available to us.
Macie Jepson
In every circle I’m in right now, there’s been talk for quite a while about what’s coming, and I don’t know what to think. It’s exploded, and it’s here. Someone on our team just said, “This new teammate is off its probationary period and is now on the team.”
It’s here. So, what would your advice be to your patients on using it carefully?
Jeffrey Janata, PhD
Well, just taking off from Matt’s point: continue to think critically, to think independently, to reach your own conclusions. That would be number one. The second would be, never rely on a single source. We know that when we get data from multiple sources, converging information, we’re much more likely to reach a decent conclusion.
So don’t use just one source, and check it out with a human.
Matt Eaves
Is that a fair takeaway from a mental health and behavioral health standpoint? When you’re talking about no single source of truth, that doesn’t mean just use two AI tools. And at the beginning of our conversation, we talked about challenging patients rather than just affirming them, and about having a human being as part of your behavioral and mental health journey. Is that a fair takeaway?
Patrick Runnels, MD, MBA
Yeah, I think it’s true of all health care. There are going to be times when little things are manageable without it. But the bigger the issue and the more complicated your problem, the more vital that human connection is going to be.
Macie Jepson
Thanks so much for being here, Doctors Jeffrey Janata and Patrick Runnels of University Hospitals in Cleveland. We appreciate it.
Matt Eaves
Great discussion.