Balancing innovation with patient safety: navigating regulatory guidelines in clinical research


By Gary Shorter, IQVIA

The use of artificial intelligence (AI) and machine learning (ML) is rapidly expanding across industries. As excitement around these developments grows, however, regulatory bodies are examining the risks that accompany them. The healthcare industry is acutely aware of the elevated compliance concerns involved with this technology, and its governing organizations are taking swift action to address these challenges. The latest guidelines proposed by regulators and institutions such as the FDA, the European Union (EU), and the European Medicines Agency (EMA) are poised to reshape the life sciences industry, especially clinical trials.

The latest development in Europe, the EU AI Act, suggests a “risk-based approach” when using AI and demands transparency from providers. Similarly, both the FDA and EMA foreshadow regulations in their latest publications, calling for a human-centric approach and transparency in identifying risks such as biases, inaccuracies, and incompleteness of data that potentially interfere with patient safety and trial results.

The following proposals will provide a new set of standards to guide the use of AI and ML across the industry.

Safety embedded in clinical trials

Although certain regulations and proposals are new, the concept of safeguarding the use of technology in clinical trials is not. For years, clinical research has abided by strict guidelines to ensure the proper use of new technologies. Since the creation of the Harmonised Tripartite Guideline for Good Clinical Practice in 1996, clinical trials have followed uniform standards to protect the rights of trial subjects and ensure the credibility and reliability of data. Though technological advancements may prompt new methods of ensuring the safe use of patient information, the principles and integrity of proper data management and governance remain the same.

The most recent detailed guidance affecting clinical trials, the FDA's Software as a Medical Device (SaMD) framework, provides risk categories for software and algorithms used in treatment planning and monitoring. By identifying risks, organizations can then identify the good practice standards that determine the level of design, development, and validation required before a service is distributed.

In line with these requirements, we have developed strict operating procedures to align with these guidelines, such as implementing mitigation strategies that account for potential biases, ensuring human oversight as part of validation, monitoring models for loss of effectiveness, and understanding data limitations.

Protecting information amid evolving AI and pending regulatory frameworks

Analytics has evolved from statistical analysis to machine learning models across many business areas, but it still requires the same level of regulatory rigor. Organizations have always required, and will continue to require, strict validation embedded in standard operating procedures. The industry's latest evolution, from paper to electronic to decentralized, remote clinical trials, increases variability in data sourcing and presents new challenges to clinical trial organizers.

Adding to that challenge, the volume of data collected through clinical trials has exploded in the past five to ten years. Our research shows that clinical studies capture three times as much data as they did a decade ago, drawn from increasingly diverse endpoints. As AI and ML are leveraged to streamline the analysis of this information, there are a few key considerations organizations must address to protect their data.

As organizations implement new technologies, they must take proactive measures to accommodate any current or impending regulatory ordinance, not only to remain compliant but also to protect sensitive information and ensure optimal results.

Questions that must be considered to prepare for regulations and support the safeguarding of data include:

  • What are the origins of the data?
  • What parts of the data are essential?
  • Is there a clear data workflow or pathway?
  • What models are being used? Is the model monitored consistently?
  • Are the results being validated?
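The questions above can be sketched as a simple pre-analysis checklist. This is a minimal illustration, not a standard schema; the class and field names are hypothetical.

```python
from dataclasses import dataclass, fields

# Hypothetical readiness record mirroring the questions above;
# the field names are illustrative, not a regulatory standard.
@dataclass
class DataReadiness:
    origins_documented: bool        # What are the origins of the data?
    essential_fields_defined: bool  # What parts of the data are essential?
    workflow_mapped: bool           # Is there a clear data workflow or pathway?
    model_inventoried: bool         # What models are being used?
    model_monitored: bool           # Is the model monitored consistently?
    results_validated: bool         # Are the results being validated?

def readiness_gaps(check: DataReadiness) -> list[str]:
    """Return the names of any questions not yet answered 'yes'."""
    return [f.name for f in fields(check) if not getattr(check, f.name)]

check = DataReadiness(True, True, True, True, False, False)
print(readiness_gaps(check))  # the open items to resolve before analysis
```

Encoding the checklist this way makes the gaps auditable: the open items can be logged alongside each study rather than tracked informally.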

Measures to face stricter guidelines

There are several measures organizations can follow to protect patient information and prepare for updated regulations that will provide stricter guidelines around the use of AI and ML in clinical trials.

  • Cloud-based platform: A cloud-based data platform centralizes and standardizes clinical trial data and ensures consistency in facilitating AI/ML analysis. The structure of clinical trial data also plays a role in safe and effective use of AI/ML. Promoting data harmonization and continuous data quality checks encourages accurate analysis and identifies anomalies in clinical trial data.
  • Data governance: With data governance practices, such as data lineage and provenance tracking, organizations can monitor the origin and transformation of data as it passes through the AI/ML workflow. This provides a standardized procedure and supports transparency and auditability across clinical trial analysis.
  • Automation: By automating data processing, clinical trial organizations can minimize manual errors and optimize time usage to improve data management. Creating well-defined operating procedures for data analysis, data handling and model development also supports consistent management of data, allowing organizations to adhere to regulatory guidelines.
  • Human in the loop: Safe data management while using AI/ML requires collaboration between humans and machine intelligence. One concept employed by our organization is the "human-in-the-loop," ensuring that an active human participant in the data analysis process provides sign-off, examines regulations, and monitors for discrepancies. The interaction between humans and intelligence should not be an afterthought; human beings should be actively integrated into all AI/ML workflows.

As AI and ML expand the possibilities of clinical trials and accelerate analysis and information gathering, organizations must keep patient safety front of mind. Embracing human-in-the-loop principles and prioritizing cloud-based integration, data governance, automation, and transparency are crucial to meeting regulatory standards while harnessing the full potential of evolving technologies. The success of AI/ML in clinical trials hinges on balancing innovation with the safeguarding of patient well-being.

Gary Shorter is head of artificial intelligence at IQVIA.