AI in Healthcare

Major companies are racing to capture the promise of artificial intelligence in healthcare, while doctors and other experts work to integrate the technology safely into patient care.

Why is AI Important in Healthcare?

“Healthcare is probably the most impactful utility of generative AI that there will be,” Kimberly Powell, vice president of healthcare at AI hardware giant Nvidia (NVDA), declared at the company’s AI Summit in June. Nvidia has partnered with Roche’s Genentech (RHHBY) to enhance drug discovery in the pharmaceutical industry, among other investments in healthcare companies.

Other tech names such as Amazon (AMZN), Google (GOOG, GOOGL), Intel (INTC), and Microsoft (MSFT) have also emphasized AI’s potential in healthcare and landed partnerships aimed at improving AI models.

How is AI Being Used in Healthcare?

Growing demand for more efficient healthcare operations has tech companies racing to develop AI applications that handle everything from appointment scheduling and billing to drug development and the reading and interpretation of medical scans.

Intel CEO Pat Gelsinger demonstrates an AI-powered ultrasound monitoring heartbeats in real time during his keynote speech at Computex 2024 in Taipei on June 4, 2024. (I-HWA CHENG/AFP via Getty Images)

The overall market for AI in healthcare is expected to grow to $188 billion by 2030 from $11 billion in 2021, according to Precedence Research. The market for clinical software alone is expected to increase by $2.76 billion from 2023 to 2028, according to Technavio.

What are the Concerns?

Practitioners, meanwhile, are preparing for a potential technological revolution. Sneha Jain, a chief fellow in Stanford University’s Division of Cardiovascular Medicine, said AI could become as integrated into the healthcare system as the internet, while stressing the importance of using the technology responsibly.

“People want to err on the side of caution because in the oath of becoming a doctor and for healthcare providers, it’s, ‘First, do no harm’,” Jain told Yahoo Finance. “So how do we make sure that we ‘First, do no harm’ while also really pushing forward and advancing the way AI is used in healthcare?”

Potential patients seem wary: a recent Deloitte Consumer Health Care survey found that 30% of respondents said they “don’t trust the information” provided by generative AI for healthcare, up from 23% a year ago.

The Fear: ‘Garbage In, Garbage Out’

Patients seem to have reason to doubt AI’s current capabilities. A study published in May evaluated large multimodal models (LMMs), AI systems that interpret media such as images and video, and found that models like OpenAI’s GPT-4V performed worse than random guessing on medical diagnosis questions.

“These findings underscore the urgent need for robust evaluation methodologies to ensure the reliability and accuracy of LMMs in medical imaging,” the study’s authors wrote.

Dr. Safwan Halabi, vice chair of imaging informatics at Lurie Children’s Hospital in Chicago, highlighted the trust issues surrounding these models, comparing the deployment of untested AI in healthcare to putting self-driving cars on the road without proper test drives.

Doctors and nurses prepare a knee surgery tool that uses artificial intelligence (AI) at a hospital in Bandung, Indonesia, on June 24, 2024. (Ryan Suherlan/NurPhoto via Getty Images)

In particular, Halabi expressed concern about bias. If a model were trained on health data from Caucasian patients in Northern California, he said, it may not provide appropriate and accurate care for African American patients on the South Side of Chicago.

“The data is only as good as its source, and so the concern is garbage in, garbage out,” Halabi told Yahoo Finance. He added that “good medicine is slow medicine” and stressed the importance of safety tests before putting the technology into practice.

But others, like Dr. John Halamka, president of the technology initiative Mayo Clinic Platform, emphasized AI’s potential to capture the experience of millions of patient journeys.

“Wouldn’t you like the experience of millions and millions of patient journeys to be captured in a predictive algorithm so that a clinician could say, ‘Well, yes, I have my training and my experience, but can I leverage the patients of the past to help me treat patients of the future in the best possible way?’” Halamka told Yahoo Finance.

Creating Guardrails

Following an executive order signed by President Biden last year to develop policies that would advance the technology while managing risks, Halamka and other researchers have argued for standardized, agreed-upon principles to assess AI models for bias, among other concerns.

“AI, whether predictive or generative, is math not magic,” Halamka said. “We recognize the need to build a public-private collaboration, bringing government, academia, and industry together to create the guardrails and the guidelines so that anyone who is doing healthcare and AI has a way of measuring: Well, is this fair? Is it appropriate? Is it valid? Is it effective? Is it safe?”

They also stressed the need for a national network of assurance laboratories — spaces to test the validity and ethics of AI models. Jain is at the center of these discussions, as she’s starting one such laboratory at Stanford.

“The regulatory environment, the cultural environment, and the technical environment are ripe for this kind of progress on how to think about assurance and safety for AI, so it’s an exciting time,” she said.

An 81-year-old patient undergoes an injected brain scan to detect a stroke. (BSIP/Education Images/Universal Images Group via Getty Images)

However, training these models also introduces privacy concerns, given the sensitive nature of a patient’s medical data, including their full legal name and date of birth.

While waiting for rules around the technology to be formalized, Lurie Children’s Hospital has instituted its own regulations for AI use within its practice and has worked to ensure the technology doesn’t expose patient information on the internet.

“My predictions are [that AI is] here to stay, and we’re going to see more and more of it,” said Halabi, the vice chair of imaging informatics at Lurie. “But we won’t really notice it because it’s going to be happening in the background or it’s going to be happening in the whole care process and not overtly announced or disclosed.”
