As an associate professor at University of California, Berkeley, Dr. Ziad Obermeyer has made waves throughout the healthcare informatics industry with his work on machine learning, public policy and computational medicine.
In recent years, the subject of identifying and confronting bias in machine learning has continued to emerge in healthcare spaces.
Obermeyer, who will present at the HIMSS Machine Learning & AI for Healthcare Forum next week – alongside Michigan State University Assistant Professor Mohammad Ghassemi, Virginia Commonwealth University Assistant Professor Shannon Harris and HIMSS Outside Counsel Karen Silverman – sat down with Healthcare IT News to discuss how stakeholders can take bias into consideration when developing algorithms, and why he feels optimistic about artificial intelligence.
Q. Could you tell me a bit about your background when it comes to studying bias in machine learning?
A. I came to this work, in many ways, from a place of great optimism about what artificial intelligence can and will do for medicine. So much of the work that led to the bias research was actually trying to build algorithms that work well, generally do what we want them to do, and don't reinforce structural inequalities and racism. You know, I still actually have a lot of that optimism.
But I think we need to be so careful along the way toward that vision of an artificial intelligence that helps doctors and other decision-makers in healthcare do their jobs better and serve the people they need to serve.
That’s kind of the overriding message that I try to stick to in my work: This is really going to transform medicine and healthcare for the better, as long as we are so careful and aware of all of the places that it can go wrong.
Q. And how can stakeholders and developers – and also providers – be careful in that way? What should they be taking into consideration when they’re relying on artificial intelligence to treat patients?
A. We got a lot of publicity for some of our work on bias. And what we tried to do is turn that publicity into collaborations with a lot of organizations in health, whether they were insurers, or healthcare systems, or even technology companies.
We learned some lessons from that very applied work that I think are really important for everyone who is working in this area to keep in mind.
Maybe it sounds a little trite, but the most important thing is to know what you actually want the algorithm to be doing. What is the decision that we’re trying to improve? Who is making that decision? What is the information that the algorithm should be providing to that person to help her make a better decision?
Even though it sounds so obvious, that is often missing from the way that we build algorithms. It often starts from, “Oh, I have this data, what can I do with it?” – these cart-before-the-horse situations.
I think that’s really the first and most important place to start – to really try to articulate exactly what we want the algorithm to be doing and then hold it accountable for that.
That’s where we started when we did our initial work, which was: OK, we want all of these population health management algorithms to be helping us understand who’s sick. That’s what we want to be doing. But what are the algorithms actually doing? Well, they’re predicting who’s going to cost money.
And even though those two things are related, they’re actually quite different, especially for non-white people, and poor people, and rural people, and anyone who lacks access to or is treated differently by the healthcare system.
I think that [question of algorithmic purpose] is easy to say, but it’s much harder to do because it requires you to really understand the context in which algorithms are operating, understand where the data comes from, understand how structural biases can work their way into the data and then work around them.
One of the really important things that I learned from this work is that, even though we’ve found bias now way beyond that initial algorithm – almost everywhere we’ve looked in the healthcare system, through these partnerships – we’ve also found that bias can be fixed if we are aware of it, and we work around it when building algorithms.
When we do that, we turn algorithms from tools that reinforce all of these ugly things about our healthcare system into tools that are just and equitable and do what we want them to do, which is help sick people.
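The cost-versus-sickness distinction Obermeyer describes can be sketched in a few lines of code. This is a toy simulation with made-up numbers, not data from his study: one group's recorded costs understate its illness because of access barriers, so ranking patients by cost under-selects that group relative to ranking by illness itself.

```python
import random

random.seed(0)

# Synthetic population of two equal-sized groups, A and B. Both have the
# same distribution of true illness, but group B faces access barriers,
# so less care is delivered and recorded costs understate illness.
patients = []
for i in range(10_000):
    group = "A" if i < 5_000 else "B"
    illness = random.gauss(50, 15)           # true health need
    access = 1.0 if group == "A" else 0.6    # barrier: less care delivered
    cost = illness * access + random.gauss(0, 5)
    patients.append({"group": group, "illness": illness, "cost": cost})

def top_share(key, k=1_000, group="B"):
    """Share of `group` among the k patients ranked highest by `key`."""
    flagged = sorted(patients, key=lambda p: p[key], reverse=True)[:k]
    return sum(p["group"] == group for p in flagged) / k

# Ranking by illness flags both groups roughly equally (~0.5);
# ranking by cost flags far fewer group B patients.
print(f"Group B share, ranked by illness: {top_share('illness'):.2f}")
print(f"Group B share, ranked by cost:    {top_share('cost'):.2f}")
```

The labels, group names and effect sizes here are invented for illustration, but the mechanism matches the interview: the two targets are correlated, yet a cost label systematically deprioritizes exactly the people who get less care for the same level of sickness.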
Q. One thing I’ve been wondering about is bias in application. Even if an algorithm were set up to be as neutral as possible, are there implementations that could be using it in biased ways? How could organizations guard against that?
A. Let’s imagine that you were a profit-maximizing insurance company. It’s still not the case that you would build an algorithm that predicts total costs because total costs are not avoidable costs.
And if you start thinking carefully about what avoidable costs are and where they come from in our healthcare system, even those kinds of costs are going to be concentrated in the most disadvantaged people. Who doesn’t go to their primary care doctor because they can’t get the day off of work, or because they can’t afford the copay? Who are the people whose heart attack hospitalization could have been prevented had they taken aspirin? [What about] the diabetic foot amputation that could have been prevented had the person checked their glucose and been taking insulin?
Even for a purely profit-maximizing insurer or health system, those are [interventions] you really need to get to disadvantaged people and prevent these expensive problems before they happen.
Health is special, because how do we use algorithms? Well, we can use algorithms to target sick people, and give them extra help and resources. Who do you want to find? It’s the most needy people who are going to get sick, and those people are the most disadvantaged people in our healthcare system.
Q. You mentioned at the beginning of this conversation that you’re feeling optimistic. What makes you feel hopeful about this field?
A. Through a lot of these collaborations with insurers or health systems, we’ve seen a lot of really great use cases of algorithms. I think algorithms can do good basically wherever human decision-making falls short.
If you’ve looked at the health system, you’ve no doubt seen at least one or two cases where humans don’t make the best decisions. I trained as a doctor; I still practice emergency medicine. And decision-making is just really hard in healthcare. It’s a complicated sector, with a lot of really hard things that humans have to do – complex data to process, whether it’s clinically or in population health or in insurance.
Anywhere that humans are faced with this super complicated set of data, and decisions that need to be grounded in those data, I think algorithms have a huge potential to help. We have this paper that shows that algorithms can really help a lot when we’re trying to figure out who to test in the ER for a heart attack.
There are lots of other population health management settings where algorithms can really help predict who’s going to get sick, rather than who just costs a lot of money.
So there are lots of cases where I think algorithms are really, really important, and they’re going to do a lot of good. That’s point one.
Point two is that we have to be really careful when we’re building those algorithms because very subtle-seeming technical choices can get you into a lot of trouble.
They can get you into a lot of trouble by doing harm to the people that you’re supposed to protect, but they can also get you into a lot of trouble with regulatory agencies and state law enforcement officials. It has not been a very good defense for organizations to say, “Oh, well, we don’t even have race in our algorithms or in our datasets, so we couldn’t be doing anything wrong.” Ignorance is a very bad look in this area. That might be the most concrete message.
We’ve published this algorithmic bias playbook, meant for an audience of people exactly like forum attendees. It’s a step-by-step guide to thinking about how to deal with bias in algorithms that you’re using or thinking about using.
Starting to think about that organizationally, having someone responsible for strategic oversight of algorithms in your organization, having ways to quantify performance and bias in general – those things are really important for your mission and your strategic priorities. Algorithms are very powerful tools to help you achieve your goals, but also for staying on the right side of the law.
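One concrete version of the playbook's "quantify bias" step is a label-bias audit: among the patients an algorithm flags, compare a direct measure of health need across demographic groups. The function, field names and records below are hypothetical illustrations of that idea, not code from the playbook itself.

```python
from collections import defaultdict
from statistics import mean

def label_bias_audit(records, threshold):
    """Among patients the algorithm flags (score >= threshold), compare
    average measured health need by group. A persistent gap suggests the
    training label was a biased proxy for need."""
    need_by_group = defaultdict(list)
    for r in records:
        if r["score"] >= threshold:
            need_by_group[r["group"]].append(r["need"])
    return {g: mean(v) for g, v in need_by_group.items()}

# Hypothetical audit sample: at similar risk scores, group B patients
# carry more measured need (e.g. chronic conditions) than group A.
records = [
    {"score": 0.9, "need": 6, "group": "A"},
    {"score": 0.9, "need": 9, "group": "B"},
    {"score": 0.8, "need": 5, "group": "A"},
    {"score": 0.8, "need": 8, "group": "B"},
    {"score": 0.4, "need": 2, "group": "A"},
    {"score": 0.3, "need": 3, "group": "B"},
]
print(label_bias_audit(records, threshold=0.7))
# → {'A': 5.5, 'B': 8.5}
```

A gap like this – flagged patients in one group being measurably sicker than in another at the same score – is the signature of a biased proxy label, the pattern described earlier with cost standing in for health.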
This interview has been condensed and lightly edited for clarity.
Obermeyer’s virtual panel with Ghassemi, Harris and Silverman, “AI Models, Bias and Inequity,” is scheduled for 3 p.m. ET on Tuesday, Dec. 14.
Original Source: healthcareitnews.com
PatientBond, Vizient Team up for Digital Behavior Change Tools
Patient engagement SaaS provider PatientBond and healthcare performance improvement and analytics company Vizient are partnering up to provide Vizient member healthcare organizations with digital patient engagement and behavior change programs.
WHY IT MATTERS
PatientBond’s digital engagement workflows can be personalized with psychographic insights, with the aim of activating patient behaviors and driving improved patient engagement and outcomes.
Through the partnership, Vizient’s customer base – which includes academic medical centers, pediatric facilities and community hospitals – will offer programming including care gap closures, condition-specific messaging, screenings, appointment reminders and appropriate-use communications.
The aim of the programs is to reduce hospital readmissions and improve digital health risk assessments.
Other programs included in the deal will provide psychographically segmented marketing campaigns to advance patient and member activation, as well as patient-physician matching, or find-a-doctor, services based on psychographic insights.
The deal will also provide extensive market research insights and dynamic payment reminders for partners.
THE LARGER TREND
Patient-reported outcomes are a critical way to assess the ongoing state of patient health and satisfaction, and a growing number of digital tools are helping providers collect them.
The financial upside for care providers is also noteworthy: Jackson Hospital significantly improved its finances with digital patient engagement tools, switching from letters and phone calls to automated emails and text messages along with some help from analytics.
At Rush University Medical Center, similar digital tools have been deployed to reduce the strain of avoidable readmissions and ED recidivism when resources were already at capacity.
Last year, Cardinal Health announced the launch of a digital patient engagement platform aimed at addressing medication adherence challenges – a significant issue for the health industry and patients.
In 2019, Vizient collaborated with Civica Rx on provider needs analytics data to reduce Rx costs. By providing insights into purchasing patterns and provider needs through its analytics and data capabilities, Vizient helped Civica Rx anticipate gaps in drug availability and affordability.
ON THE RECORD
“PatientBond brings consumer science and dynamic intervention technologies to healthcare with unmatched clinical and business results,” said PatientBond CEO Justin Dearborn in a statement. “Vizient’s member healthcare organizations can benefit from PatientBond’s personalized patient engagement at scale with proven and consistent results.”
Source: healthcareitnews.com
LifePoint Health Inks Data Deal With Health Catalyst
Brentwood, Tennessee-based LifePoint Health has entered a new collaboration with Health Catalyst and will use its analytics technologies to help bolster care quality, lower costs and improve population health management.
WHY IT MATTERS
LifePoint Health will integrate Health Catalyst’s data operating system and analytics tools to gather performance metrics and drive improvements in healthcare quality, reporting and operational and financial decision-making.
By discovering and sharing clinical data, the partnership will help reduce variation in clinical outcomes. Health Catalyst’s tools dovetail with LifePoint’s national quality and facility recognition program goals to measurably improve patient care, safety and satisfaction as well as improve access and lower costs, according to the company.
In addition to the cloud-based data platform, LifePoint will use Health Catalyst’s analyzer, insights, AI, patient safety monitoring and data entry applications. The suite of tools can help increase organizational speed and interoperability, according to Health Catalyst.
THE LARGER TREND
While healthcare organizations are just beginning to scratch the surface of using data to drive improvements, according to Health Catalyst President Patrick Nelli, the company’s strategic acquisitions have given it the ability to customize software and services around core care systems.
One of them was its purchase earlier this year of KPI Ninja, whose event-driven data processing capabilities complement Health Catalyst’s own platform, enabling customers to build new services and operational tools around their core care systems.
LifePoint, meanwhile, has been making acquisitions of its own, such as its June 2021 addition of specialty hospital company Kindred Healthcare, with an eye toward a delivery network that taps into Kindred’s specialty hospital and rehabilitative expertise and its behavioral health platform.
ON THE RECORD
“The Health Catalyst DOS platform, along with our technology product suites and applications, and improvement expertise, will best position LifePoint Health to achieve, sustain and scale the highest standards of care across its network,” said Health Catalyst CEO Dan Burton in a statement this week.
Andrea Fox is senior editor of Healthcare IT News.
Email: email@example.com
Healthcare IT News is a HIMSS publication.
Fifteen Months for Domestic Worker Who Stole Jewellery
On Thursday, a Palma court sentenced a domestic worker to fifteen months for the theft of jewellery from her employer, a woman in her eighties.
Between 2015 and the end of 2020, the 45-year-old Chilean worked two days a week at the woman’s home in Sa Indioteria, Palma. Over that period, she stole various items of jewellery. The woman only realised this at the end of 2020, which was when she reported the matter to the National Police.
The police established that these items, which included watches, rings and bracelets, were sold in gold-buying establishments in Palma. The woman later verified that these were hers. As well as the jewellery, a hearing aid was stolen.
In January 2021, the domestic worker was arrested. She was described as being in an “irregular situation” in Spain. Her lawyer obtained agreement for the sentence to be suspended so long as a sum of 10,700 euros is paid over three years, at a rate of 297 euros per month, and she does not commit another crime during this period.