Podcast – Advancing AI in Kidney Care

Based on his experience leading a team of data science and machine learning engineers, Alex Ruterbories discusses how AI tools are helping to personalize care and predict risks to improve patient outcomes.

January 9, 2025
19 minutes

In this episode of Kidney Health Connections, Dr. George Hart interviews Alex Ruterbories, director of data science at Interwell Health, on the role of artificial intelligence (AI) in enhancing kidney care. They discuss how machine learning and predictive analytics are helping personalize treatment by predicting patient risks, such as hospitalization or disease progression. Ruterbories elaborates on various types of AI, explaining how these tools can improve decision-making by synthesizing vast amounts of patient data into actionable insights to help clinicians provide better care while reducing burnout from excessive documentation.

The conversation also touches on key challenges in AI, such as bias and the importance of human oversight to prevent errors, particularly in generative AI, where systems may "hallucinate" or produce inaccurate information. Ruterbories emphasizes the need for transparency and continuous monitoring to ensure trust in AI models, as well as the importance of collaboration between clinicians and AI developers to ensure that AI tools are practical and serve to augment, not replace, human expertise.

 

Transcript:

Welcome to Kidney Health Connections, a podcast exploring the future of kidney health and the rapid shift to value-based care, where you can learn about the latest innovations that are helping patients live healthier, more fulfilling lives. Here's your host, Dr. George Hart.

Dr. George Hart: Hello everyone. Well, we couldn't embark on creating a podcast about healthcare without planning at least one episode on artificial intelligence. Like many others in the healthcare space, our company, Interwell Health, has been using machine learning for years now to enhance patient risk stratification and improve outcomes. Today we're joined by Alex Ruterbories, our director of data science and machine learning.

Alex has spent the last year leading a team of data science and machine learning engineers to develop innovative machine learning models here at Interwell.

Alex, welcome. It's nice to have you here today. And I'm excited for our conversation.

Alex Ruterbories: Excited to be here, Dr. Hart. Thank you.

Dr. George Hart: Alex, I can't think of anything more exhilarating, but at the same time terrifying, than this whole conversation around AI and its implications. Yet I know AI is a broad-based description of a lot of different functionalities. Can you take a second and maybe just educate everybody on all the different types of AI and what it means?

Alex Ruterbories: Yeah, definitely. We can hit a few highlights. Like you said, AI is extremely broad. There are systems like Deep Blue, a reactive system that played chess against Garry Kasparov in the 1990s. What most people think of when they think of AI is machine learning-driven AI: adaptive systems that learn based on input and get smarter over time. One example of that in healthcare is supervised learning, predicting risk, as you mentioned. What is the risk of hospitalization? What is the risk of an adverse event? Very common in recent years.

Additionally, there are things like robotics, which use reinforcement learning, rewarding robots when they perform good actions. Beyond that, there are language-based systems, natural language processing is what they're called, that extract insights from speech and from text so that humans can dissect that information and use it for other purposes.

Another field in AI is deep learning, or neural networks. When you think of medical imaging and detecting diseases and tumors with CT scans, there's a whole slew of machine learning algorithms that fall under the AI umbrella that are pretty commonplace in practice these days.

I think where you're probably hearing the hype go these days is with generative AI, which is not new. It's definitely been popularized with the advent of ChatGPT, but in its modern form it's been around since approximately 2014, with generative adversarial networks, or GANs as they're called.

So we have all of these different tools at our disposal that we can use to impact patient care or to improve our operations and efficiency, to be able to see more patients and make better data-driven decisions about those patients.


Dr. George Hart: So the idea would be AI can make us better, make us more consistent, make us more objective, and help us have better predictive characteristics for a patient population. Is that how I, as a clinician, should think about it?

Alex Ruterbories: I think the idea, as we see it, is to empower you to make those decisions without having to dig for the data, dig for the important elements of a patient, and be able to have that at your fingertips so that you can do what you do best as a human, which is the reasoning part of it. These systems have a lot of knowledge, but they aren't intelligent the same way that you are.

Dr. George Hart: Well, thank you for that compliment.

You know, again, my own experience as a clinician was that for 30 years, the rate-limiting step was the speed at which information came at me. Now I'm the rate-limiting step, or a clinician's the rate-limiting step, because information comes at a pace that far exceeds our ability to digest it. How does all that you're talking about play into the life of an individual clinician as he's working through his day? Give me some cases where you see that being helpful.

Alex Ruterbories: Yeah, absolutely. Like you said, we're generating information at an unparalleled speed right now. Traditionally that's been claims data, but our entire care team is also interacting with these patients all the time. We have conversations with these patients — that's data that can be used to inform treatment. And we have labs, we have pharmacy data, we have the imaging like I mentioned.

And synthesizing all of this information is really the direction where AI has the ability to make an impact: how do you take all this information and summarize it in a way that's digestible for whoever is talking to that patient at that point in time, so they can have the right conversation and get to the right outcome?

Dr. George Hart: So you talked about machine learning and, you know, predictive modeling. What are some of the variables that you need to enter into that equation to spit out, if you will, an appropriate prediction for patients, whether it be at the population or the individual level? What are some of the factors that are important?

Alex Ruterbories: We'll talk about the individual level, because that's how our models work in this instance. We're definitely looking at all of our claims data, all of our clinical data, labs data and what the markers are, pharmacy data. But in addition to that, we're looking at the events, we're looking at what interactions we've had and when we've had them — think of an annual wellness visit — to try and detect early warning signs that things could go off the rails.

Dr. George Hart: How accurate can we get for, say, predicting the hospitalization risk for patients?

Alex Ruterbories: Yeah, I think we're over... I mean, it depends on the time horizon that you're trying to predict out to. If you're trying to predict rehospitalization in the next month, you can be 80% accurate. If you're trying to predict hospitalization over the next 90 days, for example, we might be 85% accurate. It really depends on the model, the use case, what the target is, and what kind of data we can feed into those models.
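To make that concrete, here is a minimal sketch of how an individual-level hospitalization risk model over a fixed horizon might be trained and evaluated. The feature names, synthetic data, and model choice are illustrative assumptions for this page, not Interwell Health's actual models or data.

```python
# Illustrative sketch only: synthetic data and hypothetical features,
# not Interwell Health's actual models or data.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = pd.DataFrame({
    "egfr_latest": rng.normal(45, 15, n),              # lab marker
    "hosp_count_prior_year": rng.poisson(0.8, n),      # claims history
    "active_medications": rng.poisson(6.0, n),         # pharmacy data
    "days_since_last_visit": rng.integers(0, 365, n),  # care-team interactions
})
# Hypothetical label: hospitalized within the chosen horizon (e.g., 90 days).
# With random synthetic labels the AUC hovers near 0.5; real data would not.
y = rng.binomial(1, 0.15, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A separate model would typically be trained per target and horizon, which is one reason the accuracy figures above shift with the prediction window.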

Dr. George Hart: Reliable?

Alex Ruterbories: Very. And explainable, too. That's a cornerstone of this: making sure that they're explainable and transparent.

Dr. George Hart: Yeah. Does that same sort of accuracy and predictability translate into predicting the progression of CKD? That's another factor where, you know, I think the market's heading in that direction.

Alex Ruterbories: Yeah. The same types of data are really important there. You know, healthcare resource utilization is critical in predicting whether or not progression is going to happen. I'm sure you as a clinician understand exactly why that is. Additionally, adherence to medication is really important there. Who are they talking to, when are they talking to them, how frequently are they talking to them? All of that plays into disease progression.

Dr. George Hart: You know, Alex, as we're thinking through and we're talking through all of these issues with AI, I can't help but think about my inability to dismiss bias as I looked at things. And, you know, it's kind of the human condition. AI gives us an advantage in that regard. Can you kind of elaborate on this concept of bias?

Alex Ruterbories: Yeah. Addressing bias is critical for these models, whether they're, you know, tabular models that are predicting risk or generative models. Detecting bias early on is very doable. It's not easy. But for example, in your risk stratification models, what we'll end up doing is making sure that the data is representative of the population, that it's very broad and deep, so that the models have the ability to learn the nuance that would typically be masked by bias or lead to undetected bias. So that's a really important step.

When we talk about the generative AI elements, a good example is that these models have a tendency to hallucinate and make information up. There are techniques in play that allow us to mitigate that, where the models are forced to look at source information and almost fact-check themselves in real time on the answers they're putting out, so that they don't hallucinate.
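As a rough illustration of one such technique, here is a toy sketch in which generated statements must be supported by retrieved source passages before they move on; anything unsupported gets flagged for a person. The retrieval stub, the token-overlap heuristic, and all names are invented for illustration, and production grounding is far more sophisticated.

```python
# Toy grounding check: generated statements are compared against retrieved
# source text, and unsupported statements are flagged for human review.
# Retrieval and generation are stubbed; this is not any vendor's real API.

def retrieve_context(patient_id: str) -> list[str]:
    # Hypothetical stub: in practice this would pull vetted chart excerpts.
    return ["eGFR 38 mL/min on 2024-12-01", "Allergy: penicillin"]

def supported(statement: str, context: list[str]) -> bool:
    # Crude support heuristic: every key token must appear in some passage.
    tokens = [t for t in statement.lower().split() if len(t) > 3]
    return all(any(t in passage.lower() for passage in context) for t in tokens)

draft = ["eGFR 38 mL/min on 2024-12-01", "Patient tolerates penicillin well"]
context = retrieve_context("patient-123")
for statement in draft:
    flag = "ok" if supported(statement, context) else "NEEDS HUMAN REVIEW"
    print(f"{flag}: {statement}")
```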

I think at the end of the day, with the generative models, it's really important, at least at this stage, that there is a human in the loop, that a human is reviewing that output before interacting with the patient, before passing it on.

And underpinning both of those, regardless of whether it's generative or more traditional supervised learning like the risk stratification models, is continuous monitoring in the background, making sure that subpopulations are being treated in an equitable way and that these models are not injecting their own biases. And if they are, having control of these models in-house becomes really important, because we can not only suss out where those biases occur, but retrain the models and point them in the right direction to remove those biases.
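A minimal sketch of what that background monitoring could look like: compute an error metric per subpopulation and raise a flag when the gap gets too wide. The toy data, column names, metric, and threshold are all assumptions, not Interwell's actual monitoring.

```python
import pandas as pd

# Toy scored predictions with a subgroup column (all values invented).
scored = pd.DataFrame({
    "subgroup":  ["A", "A", "A", "B", "B", "B"],
    "predicted": [1, 0, 1, 0, 0, 0],
    "actual":    [1, 0, 1, 1, 0, 1],
})

# False-negative rate per subgroup: the share of truly high-risk patients
# the model missed. Equity monitoring compares this across subpopulations.
positives = scored[scored["actual"] == 1]
fnr = positives.groupby("subgroup")["predicted"].apply(lambda p: (p == 0).mean())
print(fnr)

if fnr.max() - fnr.min() > 0.10:  # illustrative alert threshold
    print("Equity gap detected: queue the model for review and retraining.")
```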

Dr. George Hart: You mentioned hallucinating. Can you give examples of where that might fit in and how we as humans that are monitoring this should put in the checks and balances, so that we don't see it have a significant impact?

Alex Ruterbories: Yeah. A good example of hallucination is you can trick these generative AI models, and they're getting better about this, but you can trick them to give you wrong answers, whether it's two plus two equals five or something more complex. And if we have the ability to trick them to do that, if they don't have the right information, they'll do it on their own as well.

And so that's again where the human in the loop really becomes important. A good example would be, say we generate a bunch of recipes, a meal plan for a dietitian to review. That dietitian has the ability to then take that generative output and decide to accept, reject, tweak, or customize it to what they know about the patient, whether that's allergies, whether that's food preferences, whether that's the macronutrients, or otherwise. That human in the loop becomes really important to make sure that we're getting the right information to the patient at the right time.
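To show the shape of that human-in-the-loop gate, here is a small hypothetical sketch in which generated output stays pending until a reviewer accepts, edits, or rejects it; the dataclass and the meal-plan example are invented.

```python
from dataclasses import dataclass

# Human-in-the-loop gate: nothing generated reaches a patient until a
# reviewer explicitly accepts, edits, or rejects it.
@dataclass
class GeneratedItem:
    content: str
    status: str = "pending"   # pending -> accepted | edited | rejected
    reviewer_note: str = ""

def review(item: GeneratedItem, decision: str, note: str = "",
           edited_content: str | None = None) -> GeneratedItem:
    item.status = decision
    item.reviewer_note = note
    if edited_content is not None:
        item.content = edited_content
    return item

# A dietitian tweaks a generated meal plan based on a known allergy.
meal_plan = GeneratedItem("Breakfast: peanut-butter oatmeal")
review(meal_plan, "edited", note="patient has a peanut allergy",
       edited_content="Breakfast: oatmeal with berries")
print(meal_plan.status, "-", meal_plan.content)
```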

Dr. George Hart: Well, it's good to know that we're not expendable just yet as humans. One of the areas that I'm attracted to with AI is this ambient listening part of it, where we're doing an encounter with a patient. Whether it's me as a clinician or a nurse, you know, we spend a lot of time on documentation. And this generative AI ambient listening seems to be the formula to make us way more efficient than we are today. Am I thinking about it correctly?

Alex Ruterbories: Yeah, I think that's one of the most powerful things that's pretty achievable at this point with generative AI. We spend thousands of hours talking to patients, and the documentation is a huge burden. It's a contributor to burnout as well. Also, capturing all the little details is pretty challenging, and the accuracy begins to wane over time.

With ambient note generation, or ambient listening, you're able to do the speech-to-text, get a full transcript, and store that full transcript not only for summarizing into clinical documentation, but for analytic purposes as well. You can begin to, for example, analyze sentiment, or intent, or what the action items are.

This is really important when we're talking about continuity of care: if I have a conversation with a patient for an hour, being able to get that to you in a digestible way means you're up to speed immediately and can, in a frictionless way, pick up right where I left off.
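For the technically curious, here is a stubbed-out sketch of the pipeline shape just described: transcribe the conversation, then derive a clinician-reviewed note and care-team action items from the stored transcript. Every function is a hypothetical placeholder, not any specific vendor's API.

```python
# Hypothetical stubs sketching the ambient-documentation pipeline:
# transcribe, summarize for the chart, and extract action items.

def speech_to_text(audio_path: str) -> str:
    # Placeholder for a real transcription model.
    return "Patient reports fatigue. Agreed to schedule labs next week."

def summarize_for_note(transcript: str) -> str:
    # A generative model would draft this; a clinician reviews before signing.
    return "Subjective: fatigue. Plan: labs next week."

def extract_action_items(transcript: str) -> list[str]:
    # Real systems use intent models; this is a toy keyword filter.
    return [s.strip() for s in transcript.split(".") if "schedule" in s.lower()]

transcript = speech_to_text("visit_2025-01-09.wav")
print(summarize_for_note(transcript))    # draft note for clinician review
print(extract_action_items(transcript))  # handoff items for the care team
```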

Dr. George Hart: No, to me, that's exciting, because one of the burdens in healthcare, and you mentioned this burnout phenomenon, is the burden of documentation that goes on within a healthcare record system. So, you know, any ability to make the life of the nephrologist, or frankly any clinician, easier is a huge win. One of my goals since coming to Interwell has been to figure out ways to get the clinician back to the bedside and get rid of some of this distraction that goes on, so I think that's really kind of a neat way that this can all be applied.

You work with a lot of different healthcare operations and care teams. You know, how are you working with them to ensure that these analytics are being used correctly and being used in a broad-based fashion?

Alex Ruterbories: I think at its heart, this is about making AI actionable; AI needs to be actionable for it to provide value. You can build a lot of cool things, but if they're not actionable, then they haven't provided value, they haven't changed the life of a clinician or a patient or anyone who's interacting with the system.

And so to make it actionable, it really takes a village. You can't rely only on the team that's developing the AI solution, because then it will be done in a vacuum and it won't provide value; you need clinicians to really lead with their experience. They know what it's like to be on the front lines talking to these patients: what the patient really cares about, what's important to the patient at the moment, the context of where we are in the patient journey, what would be actionable, and what data is needed, whether it's a prediction or something more descriptive, like what their GFR is. That partnership is really, really critical to developing actionable AI.

Additionally, the operations team, who is working with these patients day in and day out and talking to them, needs to have a say and a voice in how this integrates into the platforms they have, in a way that makes it frictionless for them to interact with that data, digest it, and integrate it into their conversations with patients.

Dr. George Hart: I mean, what I hear from you, really, is, you know, how do we harness this powerful tool and have it work for us instead of us being chained to it in a way where the AI is driving things? Is that kind of a fair characterization?

Alex Ruterbories: Yeah, absolutely. We want to mold it to make it fit for purpose, for what our desires are, what our needs are. And a big reason for that is that innovation is wonderful, but there needs to be a measure of practicality to it, and only through our collaboration, through our partnership, can we decide what is practical and deliver that.

Dr. George Hart: That sounds really great. And I love this idea that you bring forth of marrying together technology with the clinicians so that we don't lose sight of what goes on at the bedside. So I think there's great value there. Doesn't sound like this is a one-size-fits-all model, though. How do you, you know, work that into the work you're doing today at Interwell?

Alex Ruterbories: Whether it's our risk stratification, which is the example I'll talk about today, or even generative AI, at the end of the day there are out-of-the-box solutions that you can apply.

But in the case of risk stratification, those are typically built on narrow datasets. They may cover millions and millions of patients, but they don't take into account many factors about each patient, or the context of why the patient is where they're at, and that makes them really hard to act on, unfortunately. Not everything is impactable; a patient's age isn't something you can change. And so with those out-of-the-box solutions, a lot of times you get a risk score that doesn't tell you why the risk is what it is. It basically dumps work at your feet to then go and do the investigation yourself.

When we're building these custom models, we take into account many more elements of a patient's journey, and that becomes a story we can expose to you along with the prediction, which then becomes actionable. That's the explainability piece. Another thing we find when we do that, because they're custom, in-house, and so transparent, is that there's more trust in them, which makes them more actionable as well.
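One generic way to attach a "why" to a prediction, offered here as a stand-in for the explainability being described rather than Interwell's actual method, is permutation importance: shuffle one feature at a time and measure how much the model degrades. The synthetic data below continues the earlier hypothetical risk-model sketch.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 1_000
X = pd.DataFrame({
    "egfr_latest": rng.normal(45, 15, n),
    "hosp_count_prior_year": rng.poisson(0.8, n),
    "days_since_last_visit": rng.integers(0, 365, n),
})
# Make the synthetic label depend on eGFR so the explanation finds a signal.
y = (X["egfr_latest"] + rng.normal(0, 5, n) < 35).astype(int)

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=1)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")  # bigger drop = more important feature
```

Pairing each prediction with the features that drove it is one way to turn a bare risk score into the kind of story described above.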

Dr. George Hart: That's a really great answer. But I think the other thing I did hear, though, is that generative AI is not going to turn me from a 65-year-old gentleman back into a 56-year-old.

Alex Ruterbories: Correct.

Dr. George Hart: Can't do that.

Alex Ruterbories: Unfortunately. Not yet.

Dr. George Hart: Okay. But maybe down the road?

Alex Ruterbories: Maybe, someday.

Dr. George Hart: Okay, good, good.  

So what do we need to be worried about, with AI? You talked a little bit about hallucinating and you kind of put some bumpers around that and guardrails. Are there any other pitfalls that we need to think about and be aware of?

Alex Ruterbories: I think health equity is the really big one. Bias is really big. Lack of trust, lack of adoption, lack of desire to use these. The fear that they're going to replace your job, which is not true; they're here to augment human intelligence, not to replace it. I think those are probably the biggest things we should worry about.

Dr. George Hart: Last question. For this audience, in your view, where's this going? What's the future? What's your crystal ball tell you?

Alex Ruterbories: The crystal ball is always going to be a little hazy. There are going to be some diversions, but there are some foundational pieces that we really believe are true. First and foremost, that it's here to stay. These tools are going to be deeply integrated inside of workflows. And through that deep integration, we're really going to see AI-driven clinical decision support: having all of the right data available so that the right decision can be made quicker, more effectively, more accurately. A critical component to that, again, is that these aren't here to replace human intelligence, they're here to augment it; put the right data in your hands so that you can make the right decision for the patient.

What's really important, or what's really cool about that, in this day and age, is that through AI we're able to synthesize much wider swaths of information in multimodal ways. Text can be brought together with your images, with your claims data, with your tabular data, and all of that can be surfaced and summarized in a way that makes your job much more frictionless and can empower patients to also take control of their journey.

Dr. George Hart: That's great. Thank you so much, Alex, it's been a pleasure. Thanks for joining us today in conversation. I know I've learned a lot; I'm sure that our listeners have as well. There's a lot more that we could explore in the world of AI, and I'd like to have you back on the show sometime so we can explore more of it.

For all of our listeners, thanks for tuning in. You can find more of our episodes of Kidney Health Connections on the listening app of your choice and at our website at interwellhealth.com. Thanks again. 
