Jane Reed, IQVIA

By Andrew Humphreys

MedAdNews talked to Jane Reed, director of life sciences with Linguamatics, an IQVIA company, about how pharma companies are using new AI-based technologies to advance their safety processes and other topics.

What are some of the game-changing ways that new AI-based technologies are poised to transform safety practices for life sciences organizations?

There are three primary categories that describe how AI-based technologies can improve drug safety processes for life sciences organizations. The first involves automating some of the everyday, manual, and repetitive tasks to reduce administrative burdens. This includes, for example, data entry, coding and mapping to MedDRA (the Medical Dictionary for Regulatory Activities), and identifying discrete pieces of data that are of interest for aggregate reporting.

Next is the ability for AI to transform safety from a cost center into a source of value. AI can be used to handle the massive amounts of data that flood into pharmaceutical companies and to draw insights from them. In this respect, it’s not just about collecting safety data because a regulatory report is required; it’s also about using that data to create insights that yield value across the drug development life cycle.

Finally, AI-based technologies will drive value in the realm of predictive analytics. For example, pharma companies can leverage AI technologies to analyze large volumes of post-market safety data to better understand patient responses, and then use that information to improve the next round of drug discovery and development for a particular indication.

How are some of the leading pharma companies using AI-based technologies to minimize manual processes and accelerate access to critical safety details during drug development and discovery?

Here’s one example: a top-50 pharma company used natural language processing (NLP) to map adverse events to MedDRA. All pharma companies are required to report adverse events using MedDRA terminology, which is often a manual, time-consuming process.

The challenge with manual MedDRA mapping is that adverse events can be reported in many different ways (particularly for rare disease indications), and MedDRA is updated twice a year, so mapping and organizing this information can be a tedious, lengthy process. However, once pharma companies digitize that information and use NLP to extract and map the key data attributes, they can speed up the process and free their teams from monotonous coding tasks.
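To make that mapping step concrete, here is a minimal, hypothetical sketch of dictionary-style term normalization in Python; the synonym table and terms are illustrative placeholders, not licensed MedDRA content or Linguamatics' actual pipeline.

```python
# Illustrative only: normalize free-text adverse-event mentions to
# MedDRA-style preferred terms via a synonym lookup. The dictionary below
# is a hypothetical placeholder, not real MedDRA content.
SYNONYM_TO_PREFERRED = {
    "heart attack": "Myocardial infarction",
    "high blood pressure": "Hypertension",
    "tummy ache": "Abdominal pain",
}

def map_to_preferred_term(verbatim: str) -> str | None:
    """Return a preferred term for a verbatim AE mention, if known."""
    return SYNONYM_TO_PREFERRED.get(verbatim.strip().lower())

for mention in ["Heart attack", "tummy ache", "rash"]:
    print(mention, "->", map_to_preferred_term(mention))
```

In practice, NLP systems layer tokenization, spelling variants, and ontology hierarchies on top of this kind of lookup, and the mapping is re-run when MedDRA's twice-yearly updates arrive.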

In what ways are companies and organizations going about finding the right (AI) tool for the right task?

To gain value from the range of AI technologies, many pharmaceutical companies are expanding their own data science teams. Pharma companies have a significant internal focus on upskilling their teams. At the same time, pharma companies are looking to trusted partners that have experience in the development and use of AI tools. So, generally, it’s about finding that right combination of building an internal team balanced with consulting with a trusted, deeply experienced partner to determine the most appropriate tool for any given task.

Please discuss how organizations use validated deterministic NLP to search and extract relevant content for adverse events, symptoms, patient histories, drug names, and more.

We’re at a time now when everyone is asking, “How can we use a large language model, ChatGPT, Bard, or whatever, to solve my problem?” However, a lot of common safety problems can be solved with standard deterministic NLP, particularly using ontologies, which extract a huge amount of signal from the noise that comes with safety data.

One example involves using linguistic rules to capture the context around any particular adverse-event mention, for instance to distinguish whether a compound is causing a disease or treating it. Those linguistic rules are valuable because tried-and-trusted deterministic NLP is computationally much cheaper for finding adverse events than training what can be a very expensive large language model. It’s important to understand what the right tool for the job is. Deterministic, rules-based NLP works well across a lot of common safety use cases. In contrast, large language models, such as ChatGPT and Bard, can be useful for tasks like topic summarization, where deterministic NLP is not as effective.
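As a rough illustration of what such a rule can look like (a simplified sketch, not Linguamatics' actual grammar), the snippet below uses trigger verbs between a drug mention and a condition mention to separate "causes" readings from "treats" readings; the trigger lists and example sentences are assumptions made for the illustration.

```python
import re

# Simplified, illustrative rules: classify the drug-condition relation in a
# sentence by the trigger verbs appearing between the two mentions.
CAUSE = r"(?:caused|causes|induced|induces|associated with)"
TREAT = r"(?:treats|treated|relieves|relieved|indicated for)"

def classify_relation(sentence: str, drug: str, condition: str) -> str:
    text = sentence.lower()

    def pattern(trigger: str) -> str:
        # drug ... trigger verb ... condition, all within one sentence
        return (rf"{re.escape(drug.lower())}\W+(?:\w+\W+)*?{trigger}"
                rf"\W+(?:\w+\W+)*?{re.escape(condition.lower())}")

    if re.search(pattern(CAUSE), text):
        return "possible adverse event (drug may cause condition)"
    if re.search(pattern(TREAT), text):
        return "indication (drug treats condition)"
    return "no relation found"

print(classify_relation("Drug X induced severe rash in two patients.", "Drug X", "rash"))
print(classify_relation("Drug X is indicated for severe rash.", "Drug X", "rash"))
```

Real rule sets are far richer, covering negation, hedging, and sentence structure, but the principle is the same: cheap, transparent patterns that can be validated and audited.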

How can machine learning models further boost AE detection?

Machine learning can help with AE detection when researchers are looking for adverse events in some of the less standard sources of information. For example, everyone in the industry is very familiar with how to find adverse events in medical literature, via monitoring abstracts and so forth, because these are written using standard clinical language. Often, that type of work does not require highly advanced machine learning models.

In contrast, machine learning models can be helpful when looking at new sources of patient-reported outcomes and when trying to understand the voice of the patient in social media, blogs, and the like. The models are useful in capturing novel medical terms that are written in slang or non-clinical language, and then mapping that terminology to core medical terms to gain a better understanding of patient-reported adverse events. The technology learns and gets better with experience, so each subsequent round of adverse event detection will improve.
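As a hedged sketch of that mapping step (the model name and term list are assumptions for illustration, not a description of any specific product), one common approach is to embed patient phrases and candidate clinical terms, then pick the closest term by cosine similarity:

```python
# Illustrative sketch: map patient-language phrases to clinical terms by
# embedding similarity. Model choice and term list are assumptions.
from sentence_transformers import SentenceTransformer, util

CLINICAL_TERMS = ["Nausea", "Insomnia", "Fatigue", "Pruritus"]

model = SentenceTransformer("all-MiniLM-L6-v2")
term_vectors = model.encode(CLINICAL_TERMS, convert_to_tensor=True)

def normalize(patient_phrase: str) -> str:
    """Return the clinical term closest in meaning to the patient's wording."""
    phrase_vector = model.encode(patient_phrase, convert_to_tensor=True)
    scores = util.cos_sim(phrase_vector, term_vectors)[0]
    return CLINICAL_TERMS[int(scores.argmax())]

print(normalize("couldn't sleep at all last night"))   # expected: Insomnia
print(normalize("felt queasy after the second dose"))  # expected: Nausea
```

A supervised model trained on labeled patient posts would typically replace or re-rank this similarity step, which is where the improvement with experience comes from.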

Please discuss how GPT-powered searches can rapidly review libraries of regulatory or safety guidance to find critical information.

Novel NLP tools can generate additional value by extracting, organizing, and synthesizing key safety data from internal and external information sources, such as regulatory and safety reports. The technology makes this information far more accessible to people who aren’t trained in information retrieval.

For example, if a researcher is looking across a huge volume of internal safety reports, then being able to write a question such as, “What compounds have been shown to cause necrosis in kidneys in rats?” and have GPT create an answer is a much easier process for a user to understand and engage with, compared with traditional search methods. Of course, it is critical that users think carefully about how they ask questions to ensure that they aren’t getting hallucinations, biased answers, or other errors from large language models.
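A minimal sketch of that question-answering pattern, under the assumption that relevant passages are retrieved first and a validated GPT-style endpoint answers only from what is retrieved (the llm_answer call below is a placeholder, not a real API):

```python
# Illustrative retrieval-then-generate sketch for querying an internal
# report library. Retrieval here is naive keyword overlap; production
# systems use search indexes or embeddings plus a validated LLM endpoint.
def retrieve(passages: list[str], question: str, top_k: int = 3) -> list[str]:
    q_terms = set(question.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q_terms & set(p.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(context: list[str], question: str) -> str:
    excerpts = "\n".join(f"- {c}" for c in context)
    return ("Answer using ONLY the excerpts below; reply 'not found' otherwise.\n"
            f"Excerpts:\n{excerpts}\n\nQuestion: {question}")

def llm_answer(prompt: str) -> str:
    raise NotImplementedError("Call your organization's approved GPT endpoint here.")
```

Grounding the model in retrieved excerpts, and instructing it to decline when the answer is not present, is one practical way to limit the hallucination risk mentioned above.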

What are some of the positive results from adding GenAI to regulatory and safety workflows?

Obviously, one of the most exciting things about large language models is the ability to generate novel text. We can combine that with more deterministic NLP to surface the right information to feed into these generative text models and create human-readable text based on an analysis of regulatory and safety documents, for example.

So, the real value that we’re seeing with regulatory and safety teams is the ability to train large language models to assist with document drafting. Models need to be trained with source documents and output examples. The technology enables safety teams to create automatically generated sections of dossiers, safety case reports, clinical reports, technical documents, and so forth. Using generative AI in this way can significantly accelerate some of the workflows within regulatory affairs.
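As a hedged sketch of that hand-off (the field names and the generate placeholder are assumptions, not a specific product workflow), extracted findings can be assembled into a constrained drafting prompt for the generative model:

```python
# Illustrative sketch: turn NLP-extracted findings into a drafting prompt
# for a generative model. Field names and generate() are placeholders.
findings = [
    {"drug": "Compound A", "event": "headache", "frequency": "12/240 subjects"},
    {"drug": "Compound A", "event": "nausea", "frequency": "7/240 subjects"},
]

def build_drafting_prompt(section_name: str, facts: list[dict]) -> str:
    lines = [f"- {f['drug']}: {f['event']} ({f['frequency']})" for f in facts]
    return (f"Draft the '{section_name}' section of a safety report.\n"
            "Use only the findings listed below; do not add unlisted events.\n"
            + "\n".join(lines))

def generate(prompt: str) -> str:
    raise NotImplementedError("Call your organization's approved generative model here.")

print(build_drafting_prompt("Summary of Adverse Events", findings))
```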

Safety and regulatory affairs are particularly useful fields for this combination of generative AI and NLP because they rely on extremely document-heavy processes. There is simply a huge volume of textual documents that must be analyzed in these areas, and tools like generative AI, machine learning, and NLP hold the ability to transform what were previously manual, labor-intensive workflows.

Ideally, that means pharma companies are freeing up expert teams, who may currently spend two-thirds of their time searching for and processing information, to think much more about how they can extract value from that information. In this way, the technology helps make safety and regulatory teams more effective and enables them to drive more value.