Yale study reveals disparities in hospital early warning systems
New research compared six tools to identify the most effective ways of predicting clinical deterioration.
A new study from Yale New Haven Health System, or YNHHS, reveals significant differences in the performance of early warning systems designed to predict clinical deterioration in hospitalized patients.
“Early identification of at-risk patients can be life-saving, but false alarms create alarm fatigue and waste valuable clinical resources,” Dr. Deborah Rhodes, chief quality officer for YNHHS and the study’s principal investigator, told the News.
Published on Oct. 15 in JAMA Network Open, the study compared six early warning systems across more than 360,000 patient encounters at five YNHHS hospitals. The results showed that eCART, a machine learning-based tool, outperformed other systems in accurately identifying patients at risk of deterioration, including the widely used Epic Deterioration Index. The study’s findings are likely to inform future decisions about which tools hospitals should implement to improve patient outcomes and reduce the rate of false alarms that can lead to alarm fatigue.
The study’s origins lie in YNHHS’s longstanding use of the Rothman Index, a tool that was widely adopted in many hospitals, including Yale’s. However, the tool’s utility came into question due to inconsistent responses to its alerts. At the same time, Epic, Yale’s electronic health record vendor, offered its own deterioration index at no extra cost to the health system. This prompted the team to compare several tools in use across the hospitals, including both Rothman and Epic’s systems, as well as others, such as the National Early Warning Score.
The study aimed to determine which tool would provide the most accurate early warnings for patients showing signs of deterioration. Rhodes noted that despite the intuitive appeal of the Epic Deterioration Index, which is integrated with the hospital’s records system, it sometimes failed to provide sufficient warning or fired too many false alerts. After reviewing the literature, the team decided to conduct a comprehensive analysis to assess the performance of multiple tools across thousands of patient encounters.
The eCART system, the only tool utilizing advanced machine learning, stood out for its superior accuracy and efficiency. In comparison, the Epic tool was found to over-alert significantly, which can lead to clinicians ignoring alarms. The eCART system identified more than 300 additional at-risk patients while reducing the number of false alerts by nearly 48,000 over the study period. These findings have important implications for healthcare providers, as alarm fatigue can diminish the effectiveness of even the most advanced systems.
Dr. Jonathan Siner, a co-author of the study, emphasized the importance of reducing false alarms to improve clinical workflows.
“If you have alerts that go off too frequently, people don’t pay attention to them,” he said.
The study addressed this issue by evaluating each system’s positive predictive value, which measures how likely it is that an alert identifies a real problem. The researchers focused on this metric because reducing false warnings helps ensure that clinicians respond to meaningful alerts and concentrate on the patients who need immediate attention.
The eCART system’s higher positive predictive value of 15 percent, compared to lower figures from the other tools, means that clinicians are more likely to respond when the system flags a potential deterioration. The study team believes this accuracy could improve overall patient outcomes by allowing earlier intervention.
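For readers unfamiliar with the metric, positive predictive value is simply the fraction of alerts that turn out to be real. A minimal sketch, using made-up alert counts that are not from the study:

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """Fraction of fired alerts that flagged a real deterioration event."""
    return true_positives / (true_positives + false_positives)

# Illustrative numbers only: at a 15 percent PPV, roughly 1 in every
# 6.7 alerts corresponds to a patient who actually deteriorates.
ppv = positive_predictive_value(150, 850)
print(ppv)  # 0.15
```

At lower PPVs, the ratio of false to true alerts grows quickly, which is the mechanism behind the alarm fatigue the researchers describe.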
The researchers applied data analysis techniques to ensure a robust comparison between the tools. Dr. Chenxi Huang, co-author and associate research scientist at the Center for Outcomes Research and Evaluation at Yale, explained that methodologies such as time-matching and bootstrapping were used to compare the scores at the same time points for each tool.
Time-matching aligns data from different tools to the same time intervals, enabling a direct, simultaneous comparison, while bootstrapping repeatedly resamples the data at random to account for variation within the dataset and produce unbiased estimates. These methods ensured that the comparison was as fair and representative as possible, given the large dataset used in the study.
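The bootstrapping idea can be illustrated in a few lines. This is a generic percentile-bootstrap sketch, not the study’s actual code, and the function name and parameters are invented for illustration:

```python
import random

def bootstrap_mean_ci(values, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample with replacement many times,
    then read a (1 - alpha) confidence interval for the mean off the
    sorted resampled means."""
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

Applied to, say, per-encounter alert accuracies for one tool, the width of the resulting interval shows how much the estimate varies across the dataset, which is what lets two tools be compared fairly rather than on a single point estimate.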
The study also highlights the potential for future improvements in early warning systems, particularly those incorporating machine learning. However, Siner noted that successful implementation of such tools depends not only on the technology itself but also on how it is integrated into hospital workflows. Nurses and physicians need clear guidance on how to respond to alerts, and hospitals must ensure that they have consistent processes in place.
“One of the most important things to realize about any AI tool in any hospital, in my opinion, is that a tool is just a tool, and it’s the process and the standardization that you put in place, and the training that you put in place around a tool, that really determines whether it adds value,” Rhodes added.
She explained that YNHHS has adopted eCART across its hospitals and has developed clinical pathways to guide the response when the system flags a patient at risk. By standardizing the response, the hospital system aims to reduce the likelihood of missed warnings or inconsistent care.
Looking ahead, the researchers believe that external validation of these tools will be critical for ensuring their effectiveness in a range of healthcare settings.
Huang emphasized the need for transparency and continued analysis of machine learning-based systems.
“External validation is extremely important,” she said. “It’s important to ensure that these models are well validated tools and better integrated into [the hospitals’] clinical workflow.”
As hospitals increasingly rely on artificial intelligence and predictive analytics, findings like the YNHHS study can help guide decisions about how best to implement and improve these technologies for the benefit of patients.
Yale New Haven Health is located at 789 Howard Ave.