Exploring the promise of artificial intelligence in healthcare

Artificial intelligence (AI) has been described as ‘the defining technology of our time,’ and investment in the market is projected to increase rapidly in the next decade.1 One of the biggest areas of AI investment globally is the healthcare sector, with tools promising improvements in administrative efficiency and diagnostic accuracy.2,3

At this year’s Healthcare Information and Management Systems Society (HIMSS) Global Health Conference & Exhibition, Dr. Okan Ekinci, Chief Medical Information Officer at Roche Information Solutions; Dr. Bobby Reddy Jr., Chief Executive Officer of Prenosis; Dr. Peter McCaffrey, Chief AI Officer at the University of Texas Medical Branch; and Dr. Vishakha Sharma, Senior Principal Data Scientist at Roche Information Solutions, joined LabLeaders at our first in-person event to discuss the promise of AI and the considerations for implementing it safely and sustainably in a healthcare setting.

Article highlights:

  • Healthcare is one of the biggest areas of investment for AI globally.
  • Applications of AI in the healthcare sector bring new possibilities for data sources, along with the challenge of managing this data.
  • Implementation of AI tools in a healthcare setting is not straightforward and requires careful consideration.

Potential to meet increasing demands on healthcare providers

AI technologies have been proposed and developed to inform or perform a variety of tasks within healthcare settings, including administrative functions, clinical decision-making, and health data analysis. The broad range of tools available could drive efficiencies and therefore help ease staffing issues, which are a critical challenge faced by many organizations today. “No matter whether you’re a lab person or a provider person, you see staff shortages actually impacting your deliverables – meaning care,” says Dr. Okan Ekinci.

This is compounded by expectations of what care should look like, explains Dr. Peter McCaffrey. “What people really want out of healthcare is access to information and guidance. They want it everywhere, they want it all the time, and they want it about them. And they deserve to have that.” However, Dr. McCaffrey acknowledges that this is not possible with current staffing levels: “Even if we trained every clinical workforce member we could think of training today, and over the next decade, we would not have enough people to come close to doing that. It is only possible with an augmented workforce.4 One that's augmented with efficiencies of scale and automation, and that includes automating aspects of thought work as well, through AI.”

Adapting to the changing nature of data

With care at home, or virtual care, becoming more prevalent after the COVID-19 pandemic, the decentralization of care means healthcare systems will increasingly have access to data that is outside of the traditional scope. “You don’t necessarily have predefined rails for this data,” says Dr. McCaffrey, citing the example of an AI model that analyzes keystrokes on keyboards to detect early signs of Parkinson’s disease.5 Dr. McCaffrey suggests healthcare systems must change the way they view available data: “We have to really think differently about how patients broker their own data and how we integrate with data sources that are not conventional to us. If all we look at are things we collected in the hospital, we are sort of navel-gazing at risks that we already know about.”

Dr. Ekinci agrees: “The whole patient is becoming a ‘data donor’, and we need to understand how we bring those data into a care environment, and how we deliver lab services to patients who are at home. Looking into the role of data and AI in this context is imperative.” He continues, “In environments like cancer care, we have multimodality data – for example, lab data, genomic data, tumor mutation data. All these data need to be brought together and contextualized with existing data to create novel insights. Insights that inform treatment decisions, predict complications, or detect disease progression early.”

Dr. Reddy is conscious of this and is focused on delivering a powerful AI model that maximizes insights from the multimodality data available. The result? The first FDA-approved AI model for sepsis management, developed by Prenosis, which predicts the likelihood of a patient developing sepsis. He explains, “Instead of focusing only on a few lab tests to try to make a decision, why not combine biomarkers, vitals, labs, demographic parameters all together to really try to understand what's happening with the patient right now, but also what's going to happen to the patient in the future?”

Data in, data out

The wealth of data available doesn’t necessarily mean better results. Dr. Reddy cautions that AI is only as powerful as the data that goes into it: “No matter how good the AI model is, if the data isn't relevant, and if it isn't high quality, then you're not going to get the output you want.”

Dr. Vishakha Sharma notes that another critical element is how data is organized in the first place: “We have to make sure that the data is actually available, and organized in, for example, a data lake where the analyst can have access to build a robust model.” Data also needs to be standardized across models to ensure consistency, all of which requires investment and resources.

Although it has been suggested that generative AI could play a role in data organization, Dr. McCaffrey cautions that it has the potential to “organize your data in a way that's just misleading. It still works in downstream models and propagates that misleading nature.” For this reason, he explains, generative AI can only work well in healthcare with guardrails in place, and human input will be critically important at every iteration of model development and growth: “We still need to have data standards in place across the board. If you want to do high-level analytics, the error rates we can tolerate are far lower than what we tolerate in single instances, and error compounds across all those instances when they are aggregated into high-level analyses.”

Looking at different data domains, the lab is uniquely placed for the future development of AI-enabled tools and models. From IVD and molecular diagnostics to sequencing, digital pathology, and diagnostic imaging, the term “lab” encompasses a broad range of subdomains that play an increasing role in the training of medical AI models. While AI can handle many administrative tasks, such as clinical documentation, “AI can’t make lab results,” clarifies Dr. McCaffrey. “This is the fodder that AI sits on – imaging and lab results. So, I think we’re going to have a massive role in the center of that in the future.”

Practicalities of implementation

Despite increased investment in AI in healthcare and continued technological advancement, clinical uptake of these tools remains limited,5 perhaps because the implementation of such technologies requires careful consideration.3 “The main concern we encounter is whether this type of tool is actually going to make an impact, and improve things for patients,” explains Dr. Reddy.

However, the answer to that question is not straightforward. Consideration must be given to whether the tool will work in the patient population and have the required diagnostic accuracy or predictive power within that group. Dr. Reddy suggests one way to alleviate concern here is to look for products that are externally validated when it comes to clinical performance: “When you go to an external third party that's unbiased and has access to all the other technology out there, that type of external validation is critical for building trust.”

Before any new technologies are implemented, organizations should also think about what changes will be required to current workflows and the metrics they will use to measure the impact of the tool. “Let's say you have a new medical AI algorithm that gets implemented. What are other variables that can change? Do the staff need certain training? Maybe they have to change certain workflows. We want to make sure that in any healthcare ecosystem, if that algorithm gets deployed, then the staff confirms that the AI model is working the way it should be working,” explains Dr. Sharma, who proposes tracking the rate of adoption as an additional performance measure.

Dr. McCaffrey agrees, “It's a very good one, rate of adoption, because a lot of times in institutions we don't reflect on what happens after you turn the tool on. How many use it, who uses it, and why?” Financial metrics are also useful, for example, whether the tool will result in cost savings or if it will provide a new service to increase revenue. “Take screening, for example, the argument is that it's actually going to drive new money in, versus cost savings,” explains Dr. McCaffrey.

In some circumstances, external metrics need to be considered. “A lot of health systems today are very worried about things like 30-day readmission that are then associated with not only those quality metrics, but potential financial penalties,” acknowledges Dr. Reddy. “The impact on financials is a concern to almost every health system. So, what is the reason to believe that whatever money I'm putting into this tool, I'm going to get it back two times, three times, five times because of those improvements in care?”

Ultimately, organizations should at least have a KPI before launching the product, says Dr. McCaffrey: “Crystallize it to: what needle am I going to move, and within what time frame?”

Achieving common goals

The short innovation cycle in AI makes it difficult for stakeholders in the healthcare ecosystem to operate in isolation. Dr. Ekinci believes transforming the healthcare enterprise into one that embraces AI and creates value from a clinical, operational, and financial perspective requires collaboration. “I think there is a lot of commonality in understanding the risks, understanding the opportunities, and understanding how we should approach it. Therefore, interaction between industry players, academia, policymakers, and all stakeholders becomes even more warranted for long-term success.”
 

The recording from our LabLeaders Sessions at HIMSS25 will be available soon! Sign up today to have it delivered directly to your inbox!


  1. UN Trade and Development (UNCTAD). (2025). Article available from https://unctad.org/news/ai-market-projected-hit-48-trillion-2033-emerging-dominant-frontier-technology [Accessed April 2025]
  2. University of York. (2025). Article available from https://www.york.ac.uk/news-and-events/news/2025/research/burden-ai-healthcare-white-paper/ [Accessed April 2025]
  3. British Medical Association. (2024). Article available from https://www.bma.org.uk/media/njgfbmnn/bma-principles-for-artificial-intelligence-ai-and-its-application-in-healthcare.pdf [Accessed April 2025]
  4. Safavi and O’Neal. (2023). Nurse Leader 21, 473-477. Paper available from https://www.nurseleader.com/article/S1541-4612(23)00055-1/fulltext [Accessed April 2025]
  5. Massachusetts Institute of Technology. (2015). Article available from https://news.mit.edu/2015/typing-patterns-diagnose-early-onset-parkinsons-0401 [Accessed April 2025]