Why Digital Health Has Not (Yet) Transformed Pharmaceutical Drug Development


For the last several years, I’ve been arguing that digital health provides an important opportunity to improve drug development, for several reasons.

First, by providing greater insight into the patient’s actual experience of disease, these technologies can reveal important differentiating features of new therapeutics, or point out aspects of illness that new medicines ought to attack.

Second, by offering a richer readout of phenotype, digital health measurements can reveal important disease subgroups, perhaps defined by a unique underlying mechanism that can be targeted.  I’ve discussed this in detail recently and won’t focus on it again here.

Digital health technologies can of course be helpful in a range of other ways as well, such as improving adherence, enabling population analytics, and supporting clinical decision support.

While some have hailed the adoption of digital health by pharma, that’s not my impression, at least on the R&D side.  In my view, it remains very much on the “innovation initiative” side of things, rather than a clear business need (like pharmacology); drug development companies may be dipping their toes in and celebrating their bravery, but at best they are interested – certainly not what I’d call “pig committed.”

There are two main challenges I’ve seen as I’ve discussed digital health with pharma R&D stakeholders.

The first issue is that pharma is exceptionally conservative, and old habits are hard to break – especially when the resistance to change can always be attributed to others – e.g. “management is wary” or “we’d like to but worry the FDA wouldn’t go for it.”

One manifestation of this is actually the very example I cited here, referencing a commentary Dr. Ethan Basch wrote for the New England Journal of Medicine.  While Basch asserts that understanding how an oncology medicine impacted patients would permit a company to develop and market a more patient-friendly drug, even if it didn’t improve the traditional endpoint of overall survival, I’m aware of at least one example where an emerging next-generation oncology product fitting this exact description was spiked.  Although it was far better tolerated than first-generation products, it didn’t demonstrate an improvement in overall survival, and thus was felt to face significant reimbursement challenges, even though patients (had they a say in the matter) would clearly have preferred it.

Of course, one second-hand example hardly constitutes proof that patient-focused digital health technologies won’t impact drug development, but it does illustrate the mindset that many current drug developers bring to these discussions.

Another example of the conservative mindset is the anxiety around pursuing new endpoints that are more “real world,” or patient-focused.  Even though many pharma companies recognize that “established” endpoints for a disease may not be especially relevant to patients, it takes a lot of courage – and effort – to pursue novel endpoints that differ from those other drugs have used to obtain regulatory approval.  Pixar is fond of the line “the new needs friends,” and nowhere is this more true than in pharma endpoint development.

To really appreciate how conservative many pharma companies are, consider that even transitioning from paper diaries to electronic data capture is often viewed as a very serious departure in the world of clinical studies.  A university physician recently described to me how, in his particular therapeutic area, companies refuse to adopt better endpoints or even to abandon paper diaries for symptom collection: every drug previously approved in the area ran its studies on paper diaries and the established endpoints, and while company researchers agree that neither is optimal, they remain unwilling to switch.

The second major challenge digital health faces as it tries to gain acceptance in clinical trials is in some ways the converse of the first: the evidence base around digital health is incredibly shallow at this point, and generally lacks the rigor serious clinical studies require.  Often, this critique is dismissed as fuddy-duddyness on the part of establishment scientists, who are said to lack the imagination to try new approaches.

Maybe so, but here’s the thing: the history of medicine is replete with examples of technologies and approaches that make intuitive sense but happen not to work – the use of a particular type of antiarrhythmic to prevent sudden cardiac death after heart attacks, for example, or stem cell transplants to support breast cancer therapy.

Building a robust evidence base is both necessary and challenging; in many ways, our experience with biomarkers should provide a cautionary tale.  Initially, the hope was that molecular indicators would provide handy, easily accessible tools to guide drug development, offering early indications of whether therapies were working.  In some cases this has proven true, but the larger experience is that developing robust biomarkers turned out to be far more difficult and complicated than most anticipated (and many still underestimate the challenges here).  I’d urge readers to track down any Mendelspod podcast featuring Anna Barker (e.g., here) to start to appreciate the complexities.

My sense is that digital health – some have even started to use the term digital biomarkers – may be around ten years behind where we are with molecular biomarkers.  Hopefully, we can accelerate adoption of digital health by applying the hard-earned lessons from molecular biomarker development – including the need for rigorous data collection and a tough-minded analysis of the utility.

One distinct advantage digital health may have over other approaches is that the technology is in some ways more “relatable” than biomarkers.  We are all accustomed to wearables; many of us monitor some aspect of our phenotype and increasingly insist that our physicians pay attention as well.  Many hospitals are looking for ways to incorporate wearable data, and many academic physicians are beginning to apply their scientific expertise both to evaluate existing technologies critically and to develop new approaches; Dr. Brennan Spiegel of Cedars-Sinai is a conspicuous example (see here).  (Disclosure: I collaborated with Dr. Spiegel in a previous role but have no current scientific or business relationship with either Dr. Spiegel or Cedars-Sinai.)

I’m excited to get to the point where digital health technologies are routinely incorporated into clinical drug development, as I see this as profoundly beneficial for patients.  We’re definitely not there yet, but I hope we’re beginning to get close.

Source: Forbes