
Picture a world where healthcare is not confined to a clinic. 

The watch on your wrist ticks steadily throughout the day, collecting and transmitting information about your heart rate, oxygen saturation and blood sugar levels. Sensors scan your face and body, making inferences about your state of health.

By the time you see a doctor, algorithms have already synthesized this data and organized it in ways that fit a diagnosis, detecting health problems before symptoms arise. 

We aren’t there yet, but, according to Harlan Krumholz, a professor of medicine at the School of Medicine, this could be the future of healthcare powered by artificial intelligence.

“This is an entirely historic juncture in the history of medicine,” Krumholz said. “What we’re going to be able to do in the next decades, compared to what we have been able to do, is going to be fundamentally different and much better.” 

Over the past months, Yale researchers have published a variety of papers on machine learning in medicine, from wearable devices that can detect heart defects to algorithms that can triage COVID-19 patients. Though much of this technology is still in development, the rapid surge of AI innovation has prompted experts to consider how it will impact healthcare in the near future. 

Questions remain about the reliability of AI conclusions, the ethics of using AI to treat patients and how this technology might transform the healthcare landscape. 

Synergy: human and artificial intelligence at Yale

Two recent Yale studies highlight what the future of AI-assisted healthcare could look like. 

In August, researchers at the School of Medicine developed an algorithm to diagnose aortic stenosis, a narrowing of a valve in the body’s largest blood vessel. Currently, diagnosis usually entails a preliminary screening by the patient’s primary care provider and then a visit to the radiologist, where the patient must undergo a diagnostic Doppler exam.

The new Yale algorithm, however, can diagnose a patient from just an echocardiogram performed by a primary care doctor.

“We are at the cusp of doing transformative work in diagnosing a lot of conditions that otherwise we were missing in our clinical care,” said Dr. Rohan Khera, senior author of the study and clinical director of the Yale Center for Outcomes Research & Evaluation, or CORE. “All this work is powered by patients and their data, and how we intend to use it is to give back to the most underserved communities. That’s our big focus area.”

The algorithm was also designed to be compatible with cheap and accessible handheld ultrasound machines, said lead author Evangelos Oikonomou, a clinical fellow at the School of Medicine. This would bring first-stage aortic stenosis testing to the community, rather than limiting it to patients who are referred to a skilled and potentially expensive radiologist. It could also allow the disease to be diagnosed before symptoms arise. 

In a second study, researchers used AI to support physicians in hospitals by predicting COVID-19 outcomes for emergency room patients — all within 12 hours. 

According to first author Georgia Charkoftaki, an associate research scientist at the Yale School of Public Health, hospitals often run out of beds during COVID-19 outbreaks. AI-powered predictions could help determine which patients need inpatient care and which patients can safely recover at home.

The algorithm is also designed to be adaptable to other diseases. 

“When [Respiratory Syncytial Virus] babies come to the ICU, they are given the standard of care, but not all of them respond,” Charkoftaki said. “Some are intubated, others are out in a week. The symptoms [of RSV] are similar to COVID and so we are working on a study for clinical metabolomics there as well.”

However, AI isn’t always accurate, Charkoftaki admitted.

As such, Charkoftaki said that medical professionals need to use AI “in a smart way.” 

“Don’t take it blindly, but use it to benefit patients and the discovery of new drugs,” Charkoftaki told the News. “You always need a brain behind it.” 

Machines in medicine

Though the concept of artificial intelligence has existed since mathematician Alan Turing’s work in the 1950s, the release of ChatGPT in November 2022 brought AI into public conversation. The chatbot garnered widespread attention, reaching over 100 million users in two months.

According to Lawrence Staib ENG ’90, a professor of radiology and biomedical engineering, AI-powered healthcare does not yet consist of asking a sentient chatbot medical questions. Staib, who regularly uses machine learning models in his medical imaging research, said AI interfaces are more like a calculator: users input data, an algorithm runs and it generates an output, such as a number, an image or a cancer stage. The use of these algorithms is still relatively uncommon in most medical fields.

While the recent public conversation on AI has centered around large language models — programs like ChatGPT, which are trained to understand text in context rather than as isolated words — these algorithms are not the focus of most AI innovation in healthcare, Staib said. 

Instead, researchers are using machine learning in healthcare to recognize patterns humans would not detect. When trained on large databases, machine learning models often identify “hidden signals,” said David van Dijk, an assistant professor of medicine and computer science. In his research, van Dijk works to develop novel algorithms for discovering these hidden signals, which include biomarkers and disease mechanisms, to diagnose patients and determine prognosis. 

“You’re looking for something that’s hidden in the data,” van Dijk said. “You’re looking for signatures that may be important for studying that disease.” 

Staib added that these hidden signals are also found in medical imaging. 

In a computerized tomography — or CT — scan, for example, a machine learning algorithm can identify subtle elements of the image that even a trained radiologist might miss. 

While these pattern recognition algorithms could be helpful in analyzing patient data, it is sometimes unclear how they arrive at conclusions and how reliable those conclusions are. 

“It may be picking up something, and it may be pretty accurate, but it may not be clear what it’s actually detecting,” Staib cautioned.

One famous example of that ambiguity occurred at the University of Washington, where researchers designed a machine learning model to distinguish between wolves and huskies. Since all the images of wolves were taken in snowy forests and all the images of huskies were taken in Arizona, the model learned to tell the species apart based on the background environment. When the algorithm was given an image of a husky in the snow, it reliably classified the dog as a wolf. 

To address this issue, researchers are working on explainable artificial intelligence: the kind of program, Staib said, that “not only makes a judgment, but also tells you how it made that judgment or how confident it is in that judgment.”

Experts say that the goal of a partnership between human practitioners and AI is to reduce human error and clarify AI’s judgment process. 

“In medicine, well-intended practitioners still sometimes miss key pieces of information,” Krumholz said.

Algorithms, Krumholz said, can make sure that nothing “falls through the cracks.” 

But he added that the need for human oversight will not go away. 

“Ultimately, medicine still requires intense human judgments,” he said. 

Big data and its pitfalls

The key to training a successful machine-learning model is data — and lots of it. But where this data comes from and how it is used can raise ethical questions, said Bonnie Kaplan, a professor of biostatistics and faculty affiliate at the Solomon Center for Health Law and Policy at Yale Law School.

The Health Insurance Portability and Accountability Act, or HIPAA, regulates patient data collected in healthcare institutions, such as hospitals, clinics, nursing homes and dentists’ offices, Kaplan said. If this data is scrubbed of identifying details, though, health institutions can sell it without patient consent. 

This kind of scrubbed patient information constitutes much of the data with which health-related machine learning models are trained.

Still, health data is also collected in places beyond healthcare institutions, such as period tracking apps, genetics websites and social media. Depending on the agreements that users sign — knowingly or not — to access these services, related health data can be sold with identifying information and without consent, experts say. And if scrubbed patient data is combined with this unregulated health data, it becomes relatively easy to identify people, which in turn poses a serious privacy risk.

“Healthcare data can be stigmatizing,” Kaplan told the News. “It can be used to deny insurance or credit or employment.”

For researchers, AI in healthcare raises other questions as well: who is responsible for regulating it, what privacy protections should be in place and who is liable if something goes wrong.

Kaplan said that while there’s a “general sense” of what constitutes ethical AI usage, “how to achieve [it], or even define the words, is not clear.”

While some, like Krumholz, are optimistic about the future of AI in healthcare, others like Kaplan point out that much of the current discourse remains speculative. 

“We’ve got all these promises that AI is going to revolutionize healthcare,” Kaplan said. “I think that’s overblown, but still very motivating. We don’t get those utopian dreams, but we do get a lot of great stuff.”

Sixty million people use ChatGPT every day.

HANNAH MARK
Hannah Mark covers science and society and occasionally writes for the WKND. Originally from Montana, she is a junior majoring in History of Science, Medicine, and Public Health.
VALENTINA SIMON
Valentina Simon covers Astronomy, Computer Science and Engineering stories. She is a freshman in Timothy Dwight College majoring in Data Science and Statistics.