By Dr. Susan Shelby, vice president of global clinical operations at Biomedical Systems.


Getting a new drug to market is tough. Research performed by the Tufts Center for the Study of Drug Development found the average new chemical entity requires a $2.6 billion investment to make it from laboratory to market. Add in the 10-plus years a new chemical entity spends in research and development, plus the attrition rate during that time, and it’s easy to see why so few new drugs gain approval each year.

As every study sponsor and pharmaceutical executive knows, the majority of resource costs required to advance a medicine to market are incurred during the clinical research phase. For each new trial, those first few weeks of vendor engagement and collaborative study preparation are crucial: They set the stage for what follows.

To begin efficiently, sponsors must identify and address remaining protocol questions, define data-collection realities, and establish effective vendor workflows. One haphazardly planned study can derail regulatory approval, waste millions of dollars, and delay life-saving treatment for those who need it. Meanwhile, the clock is ticking on the patent.

The rapid study startup process, however, can ensure studies are finely tuned for scientific validity and the rigors of the trial process. By developing a cadence and sense of momentum during study startup, the operational team sets the tone for the entire study, creating the expectation of an efficient, rewarding experience for investigators and patients alike.

What’s Rapid Study Startup?

Rapid study startup is a collaborative trial method that emphasizes stringent procedural standardization, proactive planning, and exceptional communication during the first few weeks of a clinical research project. It assumes all parties work in parallel to accomplish interdependent, time-sensitive milestones. The masterful implementation of each milestone at the exact right time creates one moment of bright possibility in which to test the new compound.

While most clinicians understand the importance of adhering to strict guidelines, the rapid study startup approach pressure-tests the clinical investigation plan, beginning with the clinical protocol and data collection plan. This method treats robust, clear communication of the clinical trial "blueprint" as the fundamental basis for scientifically valid, efficient study outcomes. To accomplish the highly synchronized feat of rapid startup, each component must be in its final form at the precise moment it's needed. This point cannot be overemphasized, particularly among teams who may find it difficult to resist the urge to constantly improve deliverables or change the blueprint.

The most important items required are universally familiar: the final clinical study synopsis, the final clinical research protocol (including the informed consent form template and updated investigator brochure), the investigative site list (by country, considering known lead times for approvals), and the data management plan, electronic case report form, and clear strategy for collecting all clinical site data.

The rapid study startup approach avoids conflicts between study investigators by resolving them in advance of the investigator meeting, minimizes regional differences between data sets, and ensures that the clinical startup process doesn’t need to halt for gathering additional resources.

Four Tenets of Rapid Study Startup

While every clinical trial has unique study populations and tools, its own mix of personalities, and unexpected drug-specific challenges, the rapid study startup procedure itself doesn’t change. It emphasizes these four process pillars:

1. Finalize the clinical study protocol to reduce variation — then stop making changes.

The first element of a successful study is a well-defined, well-articulated study protocol. Unfortunately, study sponsors often approach us with protocols that lack specificity. While the sponsor might understand exactly how they want involved parties to implement RECIST 1.1 criteria for stage IIIa/b non-small cell lung cancer, the protocol doesn't always convey that clearly. How will the target lesions be measured? Will affected lymph nodes be included? What level of training, variability, and experience will expert radiologist over-readers have? Will the randomization blind subjects to treatment only, or will the study visits be blinded as well?

If the study protocol's details contain any gaps, sites will find a way to incorrectly perform the study or to introduce unwanted variability. To minimize scientific battles at the investigator meeting, run the protocol past both the internal study team and a group of experienced physicians in the therapeutic area who have recently done a similar trial (contract research organizations are a great source of contacts for this). Anticipate institutional differences in patient care, such as anesthesia formulations, and standardize the necessary and relevant components of the study. Then stop tinkering with the protocol. Lock it down, and reserve the finer details for other study documents, such as procedure manuals, case report form guidelines, and so forth.

2. Consider data collection and analysis: Do you need a specialized core lab and centralized data analysis?

Once the study’s protocol has been clarified and finalized, the next step is to decide how to collect and analyze the study data. Determining a single, explicitly defined approach to the study data measurements is vital because differing interpretations of study protocol endpoints might not become clear until the various testing centers have submitted their data. We all want to avoid mid-study amendments and database surprises.

Two variables affect “treatment effect” — the actual activity of the drug treatment in the study population and how well you measure that signal. If the dose has been carefully studied against the disease, including tolerability in the study population, the activity of the drug is no longer truly a variable. The one remaining variable that you can influence, then, is how well you measure the actual drug signal. This signal, however, is profoundly influenced by the amount of variability in the dataset. This data variability reflects a sum of both introduced experimental errors (e.g., differences in site equipment or in resolution of the measurement) as well as genuine interpatient and intrapatient variability.
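The additive nature of these variance components can be sketched numerically. The simulation below uses purely illustrative numbers (no real trial data); it shows how extra measurement noise, stacked on top of genuine patient variability, dilutes the very same underlying drug signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                # subjects per arm (illustrative)
true_effect = 5.0      # actual drug activity, in measurement units
biological_sd = 10.0   # genuine inter-/intra-patient variability
site_noise_sd = 8.0    # introduced experimental error (equipment, technique)

def simulated_z(extra_sd):
    """Z-statistic for treatment vs. placebo at a given measurement-noise level."""
    total_sd = np.sqrt(biological_sd**2 + extra_sd**2)  # independent variances add
    placebo = rng.normal(0.0, total_sd, n)
    treated = rng.normal(true_effect, total_sd, n)
    se = np.sqrt(placebo.var(ddof=1) / n + treated.var(ddof=1) / n)
    return (treated.mean() - placebo.mean()) / se

print(f"central lab (low noise):  z = {simulated_z(0.0):.1f}")
print(f"mixed local labs (noisy): z = {simulated_z(site_noise_sd):.1f}")
```

The drug's activity is identical in both runs; only the measurement noise changes, yet the detectable signal shrinks accordingly.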

No matter how well the study protocol is written, there will undoubtedly be minute differences in how testing is conducted across various sites. To avoid introducing additional variability, contract a single, central laboratory when possible. Using the same highly trained, experienced reader to analyze the complete data series for each trial subject — using, if possible, the same well-defined protocol and same model of medical equipment — is a proven approach for reducing variability. The reduction in data variability available with this approach may be the single most powerful tool a clinical researcher has at her disposal.

It's important to choose the core laboratory on the basis of its specialty. If a study is examining the effect of inhaled albuterol on bronchoconstriction in children, then a laboratory that specializes in pediatric pulmonology is the best choice because it will immediately recognize which study sites tend to submit poor data on the subject and which tend to excel in that area. It will also emphasize the importance of including date of birth in the spirometry assessments, because those assessments rely on comparisons against percent predicted values, and those values are highly variable in growing patient populations (patients under the age of 21). This approach requires intricate knowledge of how to collect this data within existing data privacy rules.
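Because percent predicted values depend on age, the same measured FEV1 maps to a different score as a child grows, which is why the date of birth must travel with every assessment. A minimal sketch illustrates this; the predicted-FEV1 formula here is deliberately simplified and hypothetical (real studies use published reference equations, not this toy model):

```python
from datetime import date

def age_years(dob: date, visit: date) -> float:
    """Exact age in years on the visit date."""
    return (visit - dob).days / 365.25

def predicted_fev1(age: float, height_cm: float) -> float:
    """Toy predicted FEV1 (liters) for a growing child.

    Hypothetical linear model for illustration only; real trials use
    published pediatric reference equations, not this formula.
    """
    return 0.08 * age + 0.03 * (height_cm - 100)

def percent_predicted(measured_fev1: float, dob: date, visit: date,
                      height_cm: float) -> float:
    """Score a measured FEV1 against the age- and height-based prediction."""
    return 100.0 * measured_fev1 / predicted_fev1(age_years(dob, visit), height_cm)

# The same 1.8 L measurement scores differently two years apart:
dob = date(2014, 3, 1)
print(percent_predicted(1.8, dob, date(2022, 3, 1), 130))  # at age 8
print(percent_predicted(1.8, dob, date(2024, 3, 1), 140))  # at age 10
```

A core lab that omits or mishandles date of birth would silently misclassify patients, which is exactly the kind of introduced variability the central-lab strategy is meant to eliminate.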

Choose wisely, and be sure to keep communication channels open. A specialized core laboratory can keep a study on track when less-experienced labs might not recognize or respond to poor data when they process it.

3. Define site training requirements to improve protocol implementation.

After creating a foolproof study protocol, choosing the data collection strategy, and anticipating and addressing areas of variability, think through the requirements for study coordinators and other technicians. Anyone who is patient-facing and collecting data can introduce variability.

These site training requirements and procedure guides, like the study protocol itself, must be as explicit as possible while remaining within the bounds of clinical practice. Remember, clinical testers cannot read minds: The collected data won't pass scientific muster if the implementation of data requirements is left to chance.

For pulmonary and cardiac studies, we communicate a checklist to testing sites, deliver live training at investigator meetings, and provide site procedure manuals. We also pre-certify each technician who will be collecting data, reviewing sample submissions for quality before allowing the technician to collect patient data. If site staff turn over during the study, new staff members are trained and certified in the same uniform way.

Then there’s the business of recording results. Which brand of spirometer or electrocardiogram should be used? Should it be calibrated? How many decimal places should be included in spirometry measurements? Determining answers to detailed data collection questions like these before the investigator meeting and site training is the key to driving efficiency and maintaining study momentum.

Discrepancies in decimal precision or in the interval between expirations might seem trivial, but remember: Slight variances across seemingly inconsequential variables are additive in nature. Each measurement that affects the safety or efficacy conclusions of the study should be carefully considered and optimized.

4. Finalize the investigative site, country list, and equipment.

After devising a clear, regimented approach to the study's operational requirements, it's time to define the investigative sites and to order any equipment. While it's an industry standard to keep this list as a living document, try to lock it down during study startup until you're through the first patient, first visit. Otherwise, vendors will be juggling timelines, logistics departments, and development priorities to accommodate last-minute changes. It's fine to add sites; just wait until the major milestones are met at the country level if possible. Your vendors will thank you.

Keep in mind that ordering equipment can easily be hamstrung by various nations' import-export laws: Some countries require that imported equipment be new. With all the other variables involved in a controlled study, few consider whether Argentina might accept a shipment of used ECGs, but I've seen import issues delay studies and balloon costs. These laws are also constantly changing.

Considering how closely an almost infinite set of variables must be controlled, it’s really quite remarkable that researchers have successfully tested so many drugs. The financial and time-related setbacks of unforeseen challenges or poorly written protocols can be enough to derail any study. However, keeping an eye on those issues that may impact data variability and subject safety can help a clinical researcher maintain a sense of perspective.

That's why the rapid study startup approach is so important: It ensures studies are rigorously structured for success and for rapid implementation. Every study has unexpected speed bumps, so lock down your finalized study documents and your data collection approach to keep those bumps from becoming catastrophic.


Dr. Susan Shelby is the vice president of global clinical operations at Biomedical Systems. She has been a global senior clinical director and global project team leader in pharma and biotech and has a total of 20 years of clinical research and development experience. Dr. Shelby also served on an executive committee for an FDA public-private partnership focused on innovating phase II-III clinical trial designs to enhance the treatment effect size and reduce the placebo effect.