UPMC pilots machine learning, telehealth to inform patient transfers
Thousands of patients each year are transferred between UPMC’s hospitals for high-acuity, complex medical care.
To ensure patients are fully informed before those transfers, the Pittsburgh health system is now piloting a new care process aided by a machine-learning tool.
While it’s sometimes necessary to transfer patients for more specialized care, a transfer can come with unintended consequences, like moving the patient far away from their family and other support systems. That tradeoff is particularly difficult for patients close to death, who may not want to spend their final days in the hospital. It’s important for clinicians to discuss such decisions with patients to make sure they understand the severity of their illness and that next steps align with what the patient wants.
So to ensure those conversations are taking place, researchers at UPMC and the University of Pittsburgh School of Medicine developed a machine-learning algorithm that predicts mortality for patients who may be transferred to another hospital for a higher level of care. Patients deemed at highest risk are flagged for more in-depth discussions about their care goals.
Researchers published a study validating the algorithm, dubbed Safe Non-elective Emergent Transfers, or SafeNET, in the journal PLOS One earlier this month.
The SafeNET algorithm evaluates 14 variables, including age and vital signs, to assess a patient’s risk of death.
If a patient is deemed at high risk, the tool kicks off two processes: a three-way conversation among the emergency department physician, an intensive-care unit physician at the possible transfer facility and a palliative-care clinician, as well as a telehealth palliative-care consult with the patient and family members to discuss goals, expectations and options for next steps.
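Conceptually, the workflow described above amounts to scoring a set of clinical variables and triggering the conversation process only above a risk cutoff. The following is a minimal illustrative sketch of that trigger logic; the feature names, weights, threshold and logistic form here are assumptions for illustration, not the published SafeNET model or its 14 variables.

```python
import math

# Hypothetical weights -- NOT the SafeNET coefficients, which are
# described in the PLOS One study and not reproduced here.
WEIGHTS = {"age": 0.03, "heart_rate": 0.01, "systolic_bp": -0.02}
INTERCEPT = -1.0
HIGH_RISK_THRESHOLD = 0.5  # illustrative cutoff, not the pilot's

def mortality_risk(features: dict) -> float:
    """Toy logistic score mapping clinical variables to a 0-1 risk."""
    z = INTERCEPT + sum(w * features.get(k, 0.0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_goals_of_care(features: dict) -> bool:
    """True if the patient's score should trigger the two-part
    conversation process (three-way physician call plus telehealth
    palliative-care consult) before any transfer decision is made."""
    return mortality_risk(features) >= HIGH_RISK_THRESHOLD
```

The point of the sketch is the separation of concerns the article describes: the score itself decides nothing; crossing the threshold only opens a conversation.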
Dr. Daniel Hall, medical director for high-risk populations and outcomes at the UPMC Wolff Center and an author on the study, stressed that the algorithm doesn’t make patient-care decisions. It’s meant to trigger a “pause,” during which physicians and patients talk in detail before making decisions on whether to make a transfer.
The SafeNET algorithm is currently being piloted in three EDs at UPMC. Since November, the algorithm has flagged 11 patients who had the highest probability of dying. After conversations with the palliative-care team, four of the patients ultimately decided to continue with ICU-level care and seven decided not to be transferred.
The seven patients “decided, all things considered—their goals of care, their personal values, what’s important to them—to stay locally,” said Dr. Karl Bezak, medical director for palliative care at UPMC Presbyterian and Montefiore hospitals. Instead of higher-acuity care farther from home, some of those patients chose options like at-home hospice.
The mortality risk score isn’t discussed with the patient; it’s just used to identify which patients should have the conversations.
Algorithms like SafeNET could prove a promising way to remind physicians to loop in palliative-care services before making care decisions, said Lori Bishop, vice president of palliative and advanced care at the National Hospice and Palliative Care Organization. Often, hospitals don’t have a standard approach for identifying patients who could benefit from palliative care, she said.
Including palliative care clinicians in decision-making helps to make sure care is patient-centered. “Sometimes, medicine can be a ‘run-away train’ because we make the assumption you want everything done possible until you die,” Bishop added. “What we’ve found is that people don’t always want that option, and sometimes regret that their time was spent in hospitals.”
Health systems like UPMC have built mortality risk-assessment tools for various uses. Researchers at Geisinger Health, for example, published a study in February finding that a machine-learning algorithm they developed could predict one-year mortality from echocardiogram videos of the heart, which could help inform physicians’ treatment decisions.
When integrating decision-support tools that use artificial intelligence into clinical care, it’s important to make sure the tools are developed and tested on high-quality data from diverse populations, as well as evaluated for possible biases, said Satish Gattadahalli, director of digital health and informatics in advisory firm Grant Thornton’s public sector business. He also highlighted the need to subject algorithms to peer review and to design systems so clinicians understand how the tools make recommendations, rather than treating the algorithm as a “black box.”
“Make sure there are sufficient guardrails,” Gattadahalli said.