
Artificial Intelligence in Healthcare: Benefits, Myths, and Limitations

July 23, 2021
by Rachael Altman

Artificial intelligence (AI) is reinventing and reinvigorating the modern healthcare system by finding new links between genetic codes or driving robots that assist with surgery.

The AI sector is one of the world’s fastest-growing industries: it was valued at $600 million in 2014 and is projected to reach $150 billion by 2026.

Will AI and robotics replace humans in healthcare?

In June 2021, I completed an Artificial Intelligence in Healthcare course from MIT on how AI is, and can be, applied in the healthcare industry.

There is endless potential for the benefits of AI in this sector—from the automation of administrative tasks to diagnosing and predicting diseases. It simplifies the lives of patients, doctors, and hospital administrators by performing tasks that are typically done by humans, but in less time, at a fraction of the cost, and, in some cases, with more accuracy. 

Although AI can do a lot of things, it can do a lot more with assistance from humans. So, do not worry, healthcare professionals, the robots are not coming for your jobs—they are here to help. 

That said, there are several other little-understood aspects of AI in healthcare—what it can and cannot do—that consumers or healthcare providers should be aware of.

Myths and misconceptions of AI in healthcare

Democratization

Misconception: You can take a premade computing model and apply it to any process.

This works in simple cases, but you cannot always take a preconstructed algorithm, apply it in a new situation, and expect perfect performance. Model performance can be drastically improved by adding generic and customized refinements to a premade algorithm, as the sketch below illustrates.
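Here is a minimal sketch of that kind of customization, assuming PyTorch and torchvision (the article names no specific tooling): instead of applying a generic pretrained image model off the shelf, you swap in a task-specific head and train only that on local data. The two-class "finding / no finding" task is hypothetical.

```python
# A minimal sketch (PyTorch + torchvision assumed): adapt a premade model
# rather than using it as-is. The two-class head is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# Start from a premade, generically trained image model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the generic feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer with a task-specific head.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head is trained on local data (data loader not shown);
# this is the "customized improvement" layered onto a premade algorithm.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```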

However, the democratization of AI in healthcare is leading to greater access, more predictive workflows in patient care, and smarter responses to health issues. As the industry becomes increasingly open to AI-based technology across numerous points of care, greater access to intelligent information has simplified patient monitoring, diagnostics, and treatment. Each new use of AI technology creates helpful examples and lessons, which in turn drive greater adoption.

AI introducing bias

Misconception: AI introduces bias about a person or group of people based on their perceived group membership. There could also be bias due to incomplete data.

Despite how AI is sometimes portrayed, bias is not introduced by AI itself; biases are learned from biased healthcare data and from the biased humans who write the algorithms. If properly trained, a machine learning model can actually mitigate bias and perform in a less biased way than a human might. There are two main ways bias shows up in training data: either the data collected is unrepresentative of reality, or it reflects existing prejudices.

For example, a new article in the Journal of the American Medical Informatics Association argues that such biased models may further the disproportionate impact the COVID-19 pandemic is having on people of color. 

The COVID-19 pandemic has had an outsized impact on people of color, worsened by existing disparities in healthcare and systemic racism. Researchers flagged the danger of regarding AI as intrinsically objective, particularly when building models for optimal allocation of resources, including ventilators and intensive care unit beds.
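One practical guard against the first kind of bias, unrepresentative data, is a simple audit before training. A minimal sketch, assuming pandas; the column name, group labels, and population shares are all hypothetical stand-ins, not real statistics.

```python
# A minimal sketch (pandas assumed): audit group representation in training
# data before fitting a model. All names and numbers here are hypothetical.
import pandas as pd

# Toy training set; in practice, these would be real patient records.
train = pd.DataFrame({"race_ethnicity": ["White"] * 80 + ["Black"] * 5
                      + ["Hispanic"] * 10 + ["Other"] * 5})

# Share of each group in the training data...
train_share = train["race_ethnicity"].value_counts(normalize=True)

# ...compared against the shares in the population the model will serve.
population_share = pd.Series({"White": 0.60, "Black": 0.13,
                              "Hispanic": 0.18, "Other": 0.09})

# Large negative gaps flag under-represented groups.
gap = train_share - population_share
print(gap.sort_values())
```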

Black-box concerns

Misconception: AI—deep learning in particular—operates in a black box.

A black box would be concerning in cases where interpretability matters for clinical decision support: if algorithms operate opaquely, medical professionals cannot explain how the software reaches its decisions, which is a challenge for clinical decision making and the overall patient experience. However, interpretability in deep learning is improving, making predictions more explainable.

For example, AI might alert emergency dispatchers that someone is in cardiac arrest, but the clinicians cannot fully explain how the AI reached this conclusion. That gap is worrisome because a clinician cannot explain or fully interpret the diagnosis or treatment plan produced by the AI-powered system. And if consumers are going to entrust their health and safety to AI-assisted medical care, they will want to understand how the AI arrived at its decisions.
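There are established techniques for prying the box open a little. A minimal sketch, assuming scikit-learn and a synthetic model (the article names no specific tooling): permutation importance reveals which inputs a model actually relies on, which is one ingredient of an explanation.

```python
# A minimal sketch (scikit-learn assumed): permutation importance as one
# standard way to peek inside a black-box model. The data and model here
# are synthetic stand-ins, not a clinical system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure how much accuracy drops: the features
# whose shuffling hurts most are the ones the model actually relies on.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```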

AI's limitations in healthcare

In addition to the myths and misconceptions, it is also important to be cognizant of AI’s limitations.

Differences in machines

Algorithms trained with data from one type of imaging machine may not perform in the same way when assessing instances from other types. Slight variations in the imaging machines or radiology software may be enough to negatively affect a model’s performance.

Reporting accuracy

The composition of your training data means your model may not be as accurate in practice as reported. Datasets in clinical studies, for example, may focus on a specific group. If a study on breast cancer only includes examples where the tumors are homogeneous, the reported accuracy may not transfer to other cases.
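One way to surface this problem is to break accuracy out by subgroup instead of reporting a single headline number. A minimal sketch, assuming NumPy and scikit-learn; the grouping variable (say, a tumor type per test case) is hypothetical.

```python
# A minimal sketch (NumPy + scikit-learn assumed): per-subgroup accuracy.
# The grouping variable is illustrative, e.g. a hypothetical tumor type.
import numpy as np
from sklearn.metrics import accuracy_score

def accuracy_by_group(y_true, y_pred, groups):
    """Headline accuracy can hide weak performance on small subgroups."""
    print(f"overall: {accuracy_score(y_true, y_pred):.3f}")
    for g in np.unique(groups):
        mask = groups == g
        acc = accuracy_score(y_true[mask], y_pred[mask])
        print(f"{g}: n={mask.sum()}, accuracy={acc:.3f}")
```

A model with 95% overall accuracy but 70% accuracy on a rare tumor type would look very different under this kind of reporting than under a single aggregate figure.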

Fueled by growing amounts of medical data, including data from electronic health records (EHRs), wearable fitness devices, and medical devices such as pacemakers, researchers in academia and the pharmaceutical industry are turning to AI applications to improve clinical trials and clinical decision making, predict or diagnose disease with more accuracy, speed up medical advances, and expand access to experimental treatments.

Related: 10 Healthcare Technology Trends to Improve Your Well-Being

Adversarial interventions

Machine learning algorithms are vulnerable in that they can be “tricked” with specially crafted inputs. A team at Harvard Medical School and MIT showed that it’s pretty easy to fool an AI system analyzing medical images.

Although still somewhat theoretical, an adversarial attack is one in which an otherwise effective model is manipulated by inputs designed to fool it. In one study, for example, images of benign moles were misdiagnosed as malignant after adversarial noise was added, or even after the image was simply rotated. Adversarial noise is input designed to look "normal" to humans but to cause a machine learning model to misclassify it.
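The textbook recipe for crafting such noise is the fast gradient sign method (FGSM). The study above did not necessarily use FGSM, so treat this as an illustrative sketch, assuming PyTorch and any differentiable classifier as `model`.

```python
# A minimal sketch (PyTorch assumed) of the fast gradient sign method,
# one standard recipe for adversarial noise. Illustrative only; `model`
# stands in for any differentiable image classifier.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge an image in the direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A tiny step along the gradient's sign: imperceptible to humans,
    # but often enough to flip the model's prediction.
    return (image + epsilon * image.grad.sign()).detach()
```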

Distributional shift

Your machine learning model may be unable to accurately apply what it has learned from training data to a novel set of data if there is a significant difference (or a “shift”) from one to the other. For example, if a model is trained with data from young patients in California, would it work on older patients in Australia? 

Distributional shift is familiar to many clinicians, who have to operate outside of a comfort zone when they cannot necessarily apply previous experience to new situations. Machine systems can be poor at recognizing a relevant change in context or data, and this results in the system making incorrect predictions based on “out-of-sample” inputs. A discrepancy between training and operational data can be introduced by deficiencies in the training data, but also by inappropriate application of a trained machine learning system to an unanticipated patient context.
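Shift can also be checked for directly. A minimal sketch, assuming NumPy and SciPy: a two-sample test compares one feature, here patient age, between the training data and the data a deployed model is actually seeing. The two distributions are synthetic stand-ins for the California/Australia example above.

```python
# A minimal sketch (NumPy + SciPy assumed): a two-sample KS test as a
# simple shift alarm. Both age distributions are synthetic stand-ins.
import numpy as np
from scipy.stats import ks_2samp

train_ages = np.random.default_rng(0).normal(35, 8, 1000)   # younger cohort
live_ages = np.random.default_rng(1).normal(68, 10, 1000)   # older cohort

stat, p_value = ks_2samp(train_ages, live_ages)
if p_value < 0.01:
    print(f"Possible distributional shift (KS statistic {stat:.2f}); "
          "recalibrate or retrain before trusting predictions.")
```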

Biased data

If there are biases in training data, the machine learning model will learn from them and incorporate them.

The intent of AI technology in healthcare is to help providers make more objective, real-time decisions and deliver more efficient care. AI tools rely on data to train machine learning algorithms. If you want to teach a machine to estimate a factor such as the spread of a virus across various demographics, you feed it many examples so the machine can learn to identify those patterns.

The machine is then able to make relevant distinctions. If the data are inherently biased or don’t contain a diverse representation of target groups, the AI algorithms cannot produce accurate outputs. 

For example, if the data used for an AI technology is gathered only from urgent care facilities, the AI model will learn less about patient populations that do not typically seek care at those facilities. The same goes for training an image analysis model on white male patients and then attempting to apply it to Black or Latino male patients.

Biased data can lead to delayed or inappropriate patient care delivery that results in harmful patient outcomes. Therefore, it is important to ensure healthcare data used to train AI is representative of diverse groups to help mitigate potential harm to the public, especially to historically marginalized populations. 
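One common, if partial, mitigation is to reweight training records so an under-represented group or care setting is not drowned out. A minimal sketch, assuming NumPy and scikit-learn; the data, labels, and group names are toys.

```python
# A minimal sketch (NumPy + scikit-learn assumed): weight each record
# inversely to its group's frequency so minority groups count more
# during training. Data and group names are toys.
import numpy as np
from sklearn.linear_model import LogisticRegression

def group_balanced_weights(groups):
    """Return per-record weights inverse to group frequency."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / freq[g] for g in groups])

# Toy data: 90% of records come from urgent care, 10% from a community clinic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)
groups = np.array(["urgent_care"] * 180 + ["community_clinic"] * 20)

model = LogisticRegression()
model.fit(X, y, sample_weight=group_balanced_weights(groups))
```

Reweighting is no substitute for collecting representative data, but it keeps a model from learning almost exclusively from the majority source.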

Related: Patient Experience Matters and the Pandemic Has Improved It

What's next?

AI holds great potential for healthcare systems, alongside real challenges, myths, and limitations. Through machine learning approaches, access to structured patient data is growing rapidly. The insights you can glean from healthcare data are expanding, allowing for better predictions, diagnoses, and treatments. This article is the first in a series about the benefits and challenges of AI in healthcare, and the shift to digital healthcare.



Rachael Altman

Rachael is a research analyst at G2 with a focus on healthcare and education. Prior to joining G2, she worked as an academic librarian and in research and business development at law firms, accounting firms, and nonprofit organizations. She has a BA and MA in English and Creative Writing and an MS in Library & Information Science. Outside of G2, Rachael is a career coach, yoga and meditation teacher, and jewelry maker.