AI and the RWE universe

, ,
Artificial intelligence, face

AI and the RWE universe

By Christiane Truelove

AI continues to be on everyone’s lips as the debate about the usefulness and the ethics of ChatGPT and other AI platforms and large language models continues – especially in light of the Writers Guild of America and SAG-AFTRA strikes in Hollywood. The adoption of generative AI platforms such as ChatGPT has been slow in healthcare research because of credibility, plagiarism concerns, and ethical considerations. But the use of AI in healthcare research continues to develop, with entities such as Google, MIT, and others springing up to fill the demand, which is driven by the need to grapple with increasing administrative burdens for physicians and an ever-expanding universe of data.

One area of healthcare research that is taking the use of AI very seriously is health economics outcomes research (HEOR) and the generation of real-world data (RWD) and real-world evidence (RWE) to fuel observational studies. While randomized controlled trials (RCTs) remain the gold standard in determining the clinical effectiveness of a drug, device, or procedure, observational studies using RWE continue to be evaluated as a potential alternative to RCTs  in some specific instances – such as a pandemic when observational studies can be used to get reliable answers without the time-consuming process of recruiting patients and waiting for outcomes.

At the May 2023 meeting of the International Society for Pharmaceutical Outcomes Research (ISPOR),  the use of AI in HEOR research was a hot topic, especially in the second plenary, “AI wants to Chat with You: Accept or Ignore?”

Mitchell Higashi, senior VP of HEOR at GeneDx, who co-moderated the plenary panel discussion, says though HEOR leads the field in the design and validation of survey instruments, there remain significant challenges in collecting data, particularly from patients. Large language models could be used to “facilitate a better patient experience for assessments of quality of life, disease progression, tolerability … Can we gather the same quality of information from an unstructured conversation?”

Panelist Mary Beth Ritchey, chief epidemiologist, FDA/CDRH/Office of Product Evaluation and Quality/Office of Clinical Evidence and Analysis, says while she has been using observational data for awhile, “relevance and reliability of the data source, as well as ensuring that we have fit for purpose for that specific question is really important.” 

According to panelist David Sontag, a professor of electrical engineering and computer science at MIT and part of the Institute for Medical Engineering & Science, the Computer Science and Artificial Intelligence Laboratory, and the J-Clinic for Machine Learning in Health, some of the significant obstacles HEOR scientists face in getting the information needed for observational studies is that  it is generally not in available sources such as claims data, but rather hidden in clinical text such as patient notes.

“One typically either does a very time consuming process of chart review, or if you’re lucky, you do that mostly time consuming process, and then a machine learning algorithm … extrapolates what you’ve done to much larger population,” Sontag says. “But generally, it’s many months-long effort in order to collect any piece of information you need to do your own study.”

Ritchey says by using AI to sort data, HEOR scientists can “quickly get to our variables for our other models that we’re doing for our outcomes. … This just a slog of time that it takes, especially for generating information around our confounders, which are critical for outcome analyses,” she says. “And if we can get that down to a smaller period of time, and have all of our stakeholders understanding that this is relevant information, I think that’s critical.” Additionally, AI can  help identify previously unknown confounders or other impacts to the relationship between the therapeutic and the outcome.”

The technology in today’s large language models such as ChatGPT makes it possible to bring that process down from few months to a couple of days, or even a few hours. “And it does so by changing the way that we think about how one extracts information from text,” Sontag says. Generative AI can take any one information extraction question and turn it into a generative question. Just as a clinician can read a patient note and give yes or no answers to questions such as if a patient had diabetes, AI can do the same thing.

Panelist Guillermo A. Cecchi directs the Computational Psychiatry and the Neuroimaging groups in IBM Research, and is associate director of analytics for NIH’s Advancing Medicines Partnerships – Schizophrenia. Currently he is involved in a project to gather information from chronic pain patients. Instead of bringing in the patients every couple of months and having them describe their pain on a scale of 1 to 10, “We are receiving information from the patient in a very natural way,” Dr. Cecchi says, by leaving a voice message or text message.

So far IBM has gathered 50,000 voice messages and about as many text messages, and is mapping them to describe in detail the pain patients are experiencing. “And with this model, you have the possibility of collecting evidence every single day,” Dr. Cecchi says.

Sontag says this ability to gather patient information every day is going to be paired with ambient technologies in the next five to 10 years. Right now, patient conversations with clinicians are “typically distilled down to a few words that are written down and largely lost” in medical records, he says. “So what this ambient technology is doing (and all of the major electronic medical record vendors are about to deploy this) is allowing clinicians to start recording the conversations between patient and provider. And that gives you a trail of breadcrumbs that allows you to get much more detailed information about what’s actually going on with patients.”

While the reason for ambient recording of these conversations is financial, to reduce the recordkeeping burden for physicians, “it also opens up huge opportunities for us as a community for extracting new information from that conversation,” Songtag says.

Panelists brought up concerns about implicit bias in AI due to the data that is trained on – essentially reinforcing biases that already exist in health care.

And “if one uses a large language model to try to extract information, naively speaking, you could miss information or it might try to hallucinate information that wasn’t present in the actual conversation,” Sontag says. But the HEOR community can work towards “validated instruments” in AI, “technology that is then validated in order and actually assessed not just once but [also] on a repeated basis over time for such a bias. And then with those checks and balances, I think you get to a safer deployment.”

Panelist Jacqueline Shreibati, M.D., senior clinical lead, Devices & Services (Consumer Health) at Google, says the challenge with a large language model or smaller language model “is that they are really trained to give a response, the most probable response, but the most probable isn’t always necessarily the most true, it may not be the most relevant in a particular situation. So the challenges that we face when we put a large language model into the hands of users, whether they are HEOR researchers or consumers, is we have to be concerned around groundedness or hallucination or consistency of the results. You might ask the question, five people might ask the same question,  and get a different response because of the non-deterministic way that these models can operate.”

To test these models for HEOR work, developers will have to do “red teaming” – bringing in groups of experts to test the AI and elicit incorrect responses. “I think it’s something that the HEOR community will be doing as they have access to more research APIs to do the work, you also need to check it to make sure you’re getting the intended output that you’re that you’re wanting,” Dr. Shreibati says.

AI and RWE in marketing

The same AI technology that can extract valuable patient insights and outcomes data for HEOR scientists can also be useful to pharma marketers, healthcare advertising agency experts say.

Sharlene Jenner, AbelsonTaylor

Sharlene Jenner, AbelsonTaylor

Sharlene Jenner, VP, engagement strategy at AbelsonTaylor, has been thinking a lot about artificial intelligence and generative AI in teaching classes at Southern Methodist University. “We talk a lot about the effects of data; we talk a lot about how this technology is going to change everything,” she says. “What’s really interesting is AI, in this way, changes every industry. And we haven’t really seen that platform shift since probably 2008, when smartphones came onto the scene, because that changed every industry a little bit.”

When looking at how AI improves HCPs’ ability to capture data, while EHR systems have been around for a long time, AI gives us the ability to improve the utilization of hidden data, identifying key insights and patterns. “Its key skill is being able to take unstructured data, analyze it, and put it in a pattern format,” Jenner says. “And when you think about all the data that is sitting in these electronic health records, like physician notes, clinical narratives, patient conversations, and all of this relevant data, AI systems can identify all of the information, look at key insights based on past diagnosis, treatment options … these things can get hidden in EHRs, because it’s so much data, and HCPs have a lot that they have to manage.” AI will also be able to find the insights that often are missed by human analysis, she says. “On the other side of this, it’s going to start personalizing those treatment plans, because it will have
patient-specific data. Right now we look at things as group or aggregated data.”

According to Alfred Whitehead, executive VP, applied sciences at Klick Consulting and Klick Labs, historically, pharmaceutical companies have been involved in more traditional types of machine learning for forecasting, drug development, and pharmacokinetics.

“But these new large language models, are really going to change how they relate to customers and how they gather data from large data sources,” Whitehead says. “And the reason being is that these large language models are really good at pulling out insights from text, which is something that is all over the place. And that’s something an AI could look at and understand now.

“We’ve had 15 to 20 years of EHR use in the U.S. But most of that data, as you probably know, is trapped in notes that have been put in by clinicians, by nurses, etc. Well, now we can start writing and harvesting that and .. it’s almost like a telescope, in some ways. We’ll go back in time and be able to see not just where this patient was the day you recorded their data, but maybe throughout their history, at least back to sort of the start of the EHR systems.”

Sarah Alwardt

Sarah Alwardt, Avalere

Sarah Alwardt, interim president of Avalere and practice director, says as researchers, “we’ve used administrative data for years and we’ve got a great deal of familiarity of what that looks like. And it’s all structured data. We know how to use it and even if it’s a format we don’t know, you can figure out the format that’s used.”

But EHRs have “a foot in two worlds,” Alwardt says. “There’s a foot in the world of structured data, with click boxes and buttons, and depending on the sophistication of the EHR system that’s used, can capture a lot of information. But then you have all that text. I had a physician once tell me that their goal was to find the first comment text box and dictate the entire patient note from the encounter.”

Alwardt says natural language processing “has such an amazing opportunity to structure these unstructured data — and  to get anything usable as a first step of any of this, you have to figure out how to structure it. That by itself unleashes a whole new world of data possibilities.”

Currently, with RWE and real-world data, “the real promise of these new large language models is to mine those [EHR]notes and turn it into nice columns of structured data, i.e., collect in a clinical trial that you can then analyze,” Whitehead says. Data collected in this way will probably never meet the same level of accuracy as a structured clinical trial, “but I think with this RWD, you’re going to get much more than now, and it’s mostly because going through these notes right now is not viable, because of the huge labor cost. And I think you’re going see some very good, maybe not gold standard evidence, but silver-standard evidence, if you will, coming out of that, where it’s very good. And in some patient populations, this may be the only viable way to study some of these conditions.”

According to Kellam, AI and ambient listening technologies could also help reduce healthcare bias by mining EHR data to show which patients were offered specific treatment options. “I’m curious to see when biologics are suggested to people of color,” she says. “Because it seems like they’re basically never being suggested. You have situations like lupus in which people of color are being pushed so many steroids and never being introduced to leveling up to better care. I’m really curious to see the data around, how much of that has been actually a patient-informed decision versus just withholding that information from an entire category of people.”

There is potential for using RWD and RWE in marketing as well as outcomes research. “Some of marketing is making claims about the product, and those claims have to be backed by the correct amount of evidence as per FDA [regulations],” Whitehead says. “This, in a very direct way, creates a new mechanism for companies to mine data for those claims to look at broad populations and see what’s been happening there, and maybe add some new claims to their products. That can be very helpful, to help differentiate products in a crowded marketplace … it’s really going to change how the conversation happens between companies and their customers, whether they’re patients or physicians.”

While companies are already having some of these types of conversations with physicians through medical science liaisons, “that only exists for those physicians where they have access to reps who have time for the FaceTime [call],” Whitehead says. He anticipates that a properly trained AI backed with deep expertise can provide “some very personalized communications for physicians, the ability for them to have an interaction with a system that knows you, [has] deep clinical knowledge about the drug, and also understands the regulations about what it can and can’t say, and when it has to refer to a human.”

Alfred Whitehead, Klick Health

Alfred Whitehead, Klick Health

This kind of data can also be used to tailor personalized support programs for patients that could drive better drug adherence. “We know that one of the biggest problems is that every drug you don’t take doesn’t work,” Whitehead says. “And anything we can do to get patients keeping on their therapies is going to really improve how effective the medications are in the real world.”

AI-generated RWE and RWD have the potential to expand evidence-based marketing, Jenner says. “If there are analytics that AI can uncover patterns, treatment options, or patient experiences, we can then create more evidence-based messaging, which then will target a larger audience. Or it might even target a smaller audience based on who that message can be provided to, which then leads us into my other favorite thing, which is personalization and AI.

“As marketers at an agency, we can utilize some AI-driven segmentation now, and some targeting based on characteristics and anonymized preferences and behaviors, we can create those messaging strategies that speak to a specific group or a specific set of individuals, or it can help us target channels that these patients will find themselves in”, such as online patient support groups.

“For us to be able to help guide them to find those groups, or target and segment areas around those and increase the effectiveness of our campaigns for our clients, that’s very helpful, because it helps us get the information to those who need it. And the speed of that cannot be underemphasized,” Jenner says.

“If we’re able to gain insights, quickly, with real-world data about how a indication is performing, or how a patient is receiving that treatment or care, or how an HCP is able to get that information… If we can understand the strengths of our clients’ products, if we can develop messaging that highlights those unique advantages… And if we can position them in the market, using AI-generated segmentation: what we’ve done is increased the speed to market, [and] a perfect message, and hit the target audience very quickly, which ultimately leads to better care.”

An agency such as AT can use AI to analyze large datasets just like HCPs and HEOR scientists do. “Think about being able to analyze hundreds of thousands of social media conversations, hundreds of thousands of online reviews, and then coupling that with real-world data and looking at that real-world evidence,” Jenner says. “We can then decide how = we look at a message and what resonates with our target audience because we’re looking at the target audience speaking and listening to that, we’re also able to make adjustments of that very quickly. Because we can do iterative messaging and iterative development of messaging, we can start to adjust as we start to read the market.”

Jenner says when agencies and their clients get access to these new sources of data, they have to make sure they are “ethically correct” in its use. “I always like to tell my team, yes, artificial intelligence is very important. But it’s the authentic intelligence of humans that layers on top of that, which is where the magic happens.” Jenner told Med Ad News. “It’s always going to be a partnership, there’s always going to be a driver and a passenger. Sometimes the driver might be AI, and the passenger might be the human to make sure that the information is used correctly or ethically. And sometimes the driver is the human with the AI being the tool that we harness. But it’s always going to be a partnership between the two.”

AI in the consult

The number one way AI is going to improve health care in the short term is with electronic health records, by reducing the administrative burden on physicians and putting in information accurately and quickly, Jenner says.

And that reality has come up very quickly. At the end of July, Amazon Web Services launched its HealthScribe service. AWS HealthScribe, which is in preview,  is a HIPAA-eligible service that healthcare software vendors can use to build clinical applications that automatically generate clinical notes by analyzing patient-clinician conversations. AWS HealthScribe combines speech recognition and generative artificial intelligence to reduce the burden of clinical documentation by transcribing patient-clinician conversations and generating easier-to-review clinical notes. HealthScribe analyzes consultation conversations to generate summarized clinical notes for sections such as chief complaint, history of present illness, assessment, and treatment plan. Additionally, it categorizes transcript dialogues based on their clinical relevance, such as small talk, subjective, and objective.

Some physicians want ambient recording and AI for record keeping, because of the time it will save them, Whitehead says. “And potentially the records you’re going to get are more accurate.”

Corina Kellam, Ogilvy Health

Corina Kellam, Ogilvy Health

Corina Kellam, executive VP who leads experience innovation at Ogilvy Health, says while physicians all have different ways of handling their practice notes, “ultimately, there’s always a commitment and time to make those notes, whether you use a transcription company or whether you’re sitting there actually plugging things in yourself.”

Ogilvy Health did a panel at SXSW in March on the mental health of doctors. What surprised Kellam is how much of physicians’ depression and frustration was linked to the amount of time that they have to commit to clinical notes and EHR paperwork.

AI “takes the paperwork burden away, and not only does it take away the paperwork burden, it dramatically expands what that documentation is going to include,” Kellam says.

Alwardt has spoken with physicians about how much documentation is required for a visit and the time they spent in EHRs. “It becomes an exercise in administration rather than an exercise of medicine,” she says, adding that the ability of AI to take EHR information and be able to use it effectively, as well as get things documented “is a great step forward and a letting the physicians practice medicine rather than practice their writing skills.”

When it comes to what exactly the physician is able to note about their conversations with patients in the EHR, “how much is lost between the time from the spoken word into the written word, because you’re having to move so quickly, you’re having to write so many things down,” Jenner says. “But with an AI speech recognition system, it’s going to convert that language assembly simultaneously as someone is speaking … The integration of these transcriptions can help in multiple ways, it’s going to want to eliminate that need for manual data entry by physicians. It also reduces the errors and omissions.”

While a physician might be taking notes, they are usually not thinking about all the other things that are in the background of a patient, such as the demographics of where they live, their medical history, or lab results that have happened six months earlier. But AI can present all this data in a structured manner into the EHR, according to Jenner.

“This is going to allow physicians and HCPs to improve this data accessibility and retrieve information very quickly. So that’s where I feel that there’s definitely a huge bump from a transcription perspective, because you’re getting the ability of having not just transcription, but data management.”

Chris Truelove, Med Ad News Chris Truelove is contributing editor of PharmaLive and Med Ad News.