For the twelfth year, Med Ad News has chosen new Pharmaceutical Marketing Ventures to Watch that could change the way pharmaceutical products are marketed and sold.


This past October, the Med Ad News staff began its annual search for the future of pharmaceutical marketing. We sought out young companies, spin-offs, offerings, and ventures providing the most innovative and interesting products, services, or marketing opportunities to pharma companies and the healthcare community. This year’s three profile subjects all center on bleeding-edge technology – a developer of extraordinary healthcare-focused augmented and virtual reality presentations, a creator of artificial intelligence-driven conversational “bots” for pharmaceutical brands, and a research company that has automated and exponentially accelerated the process of collecting insights from physicians and patients. Here are Med Ad News’ newest Pharmaceutical Marketing Ventures to Watch.


Confideo Labs

Confideo’s augmented, mixed, and virtual reality presentations can interactively bring presenters and viewers inside the body or into the perspective of a patient.

Confideo Labs designs, develops and deploys immersive healthcare media, such as virtual reality, augmented reality and mixed reality, to help doctors and other healthcare stakeholders better learn about products and conditions through visually stimulating interactive experiences. The company’s extraordinary immersive tools for pharma brands, health systems and CME providers – like the live mixed-reality stage presentation featuring a journey through a giant virtual heart that the company created for a Medscape Education symposium audience – have won a pile of awards, including three Tellys, three Communicator Awards, two Daveys and three W3 Awards in 2019 alone. In addition to Medscape, the company’s clients include big pharma names such as Ferring, Bausch Health, AstraZeneca and Boehringer Ingelheim, for which Confideo recently created an award-winning 360-degree immersive voyage into the lung to support Spiriva Respimat.

Confideo’s story began in 2014, when the company’s founder and CEO, Mike Marett, was introduced to one of the first next-generation consumer VR headset platforms. “Throughout my career I’ve always enjoyed identifying emerging marketing trends and determining ways to advance technology platforms that better engage and educate healthcare stakeholders,” Marett says. “In 2014, when I was introduced to the Oculus Rift development kit, I felt that we had come upon a tool that represented a major contextual shift for delivery of complex medical media and scientific content, and an unparalleled opportunity to tastefully evoke empathy and emotion to educate and inspire.”

So Marett partnered with Matt Irwin, who brought to the table design, development, programming and medical animation experience plus a degree in biomedical engineering, and established Confideo (based on the Latin roots for “trust” and “confidence”). The pair spent their first year in business growing the team and honing skills while presenting “speaker series” sessions at conferences and other special events, educating the marketplace on the imminent immersive media revolution that was primed to follow.

“We advocated the category and described the forthcoming movement, which was being corroborated at the time by major acquisitions and pivots by large companies across industries, while lobbying prospective pharma brand customers to pilot programs that would further validate our theories,” Marett says. “We felt strongly that beautifully rendered content, delivered contextually as an immersive experience, purposefully designed for visual learners, would dramatically improve recall and retention. Further, specifically as it relates to VR, we could craft programs on canvases with no borders, which when coupled with headsets that function as ‘sensory deprivation tanks,’ we could guarantee for the first time ever the delivery of core content with express focus and complete attention – and no distractions, not to mention the prolonged engagement – all of which are game changers and so pivotal for HCP education.”

Once the ball started rolling, Confideo began stacking up firsts, some of which seem little short of Star Trek. “We were first to activate national field representatives with the VR Detail, first to produce a mixed-media VR video, first to deploy mixed reality on stage in a live symposium, and most recently, first to utilize volumetric video to deploy holographic representations of patients (and doctors) via augmented reality to amplify core client content as part of a global field representative integration,” Marett told Med Ad News. “In order to achieve this, we filmed actors portraying patients using a highly specialized capture process known as volumetric video capture. This process comprises 100-plus cameras capturing content simultaneously. The result enables us to merge the media, layer it with interactive digital elements, and deploy as vibrant and interactive augmented reality. This particular project has been strategically rolled out globally, meaning that representatives at our client are using their iPads to trigger a fully dimensionalized 3D hologram that magically comes to life on the table top as part of a series of patient profiles meant to educate, engage, and inspire customers.”

As any good pharma salesperson already knows, doctors are visual learners. So they’ve been gobbling up Confideo’s multidimensional offerings everywhere they can – at conferences, in the office, or even at home on a computer or mobile device.

“[Doctors] see immersive multimedia contextually as an incredibly valuable way to deepen their education and aid clinical decision-making,” Marett says. “VR helps them better understand spatial relationships when exploring anatomy and physiology at cellular scale levels in ways not possible before. VR can also viscerally and believably place practitioners in the skin or shoes of patients in ways that other mediums simply can’t – we call it the ‘empathy engine.’ From our experience, HCPs consistently welcome opportunities to engage VR, AR or MR, across platforms. And by immersing doctors in an experience that enables them to take a voyage ‘inside’ and see with their own eyes, we are giving life to otherwise typically didactic presentations.”

How does a brand get from start to an immersive MR/VR presentation? With pharma marketers, once Confideo is provided with the strategic remits from a brand, the company offers creative counsel and guidance on optimal content style and technology platform, with consideration for audience and channel opportunities, based on its experience, expertise, and the changing technology and new media landscape. The company’s team aligns with clients on immersive healthcare media plans, which typically feature a variety of complex content modules. “This means that most strategies deploy with a variety of immersive media modules, or a plan that includes a predictable cadence for when subsequent VR or AR chapters will be added to a master index,” Marett says. “We also determine ways to amortize 3D content, whether digital video or mixed, so that it can be repurposed across additional digital channels.”

For non-pharma partners such as Medscape, with whom Confideo launched Medscape 360, a new suite of immersive learning solutions, the company follows similar processes. “[With Medscape] we tapped VR and AR, but mostly MR to create a disruptive collection of programs that have been used around the world to improve education and enhance both teaching and learning experiences for healthcare professionals,” Marett says. “Specifically the live mixed-reality stage presentation has become popular and very successful, really revolutionizing live presentations at symposiums so that CME can achieve its ultimate goal of supporting doctors in improving patient care.”

Print does not do justice to the experience of one of Confideo’s creations. For the World Conference of Gastroenterology in Orlando back in 2017, the company built and debuted the world’s first mixed-reality CME stage presentation, in which the presenter, Dr. Brennan Spiegel, equipped with a head-mounted VR display and hand controllers, was fully immersed in and interacting with 3D content on stage, enabling a comprehensive and dimensional exploration of the curriculum. Dr. Spiegel could step in and around content – explore layers of gross anatomy, for example, or interact with data sets and charts. Opposite him, a camera system pointed at the presenter and screen captured live-action video that was merged in real time with the VR media, creating the mixed-reality composite projected to the audience on additional screens around the stage. So audience members could view either the first-person VR view or the third-person composite perspective of the presenter, not to mention follow the program or submit questions to faculty on iPads. And this was two years ago.

“We are hyper-focused on quality, exceeding client expectations, and expanding the amazing trust-based relationships that we’ve forged with our wonderful clients and partners,” Marett says. “As a private, boot-strapped business we prioritize service and feel exclusively accountable to our clients and their success. With this in mind, our team is committed to always remaining on the bleeding edge of innovation and consistently elevating delivery of immersive healthcare media. We started building programs for bulky connected VR headsets that afforded rate limiting channel applications, but today we are deploying volumetric holograms as AR at global scale. The future is exciting and we plan to help pharma embrace new media and immersive experiences as they evolve, in ways that bring express utility and direct value to stakeholders.”


ConversationHealth

ConversationHealth’s bots are able to understand user questions through the use of natural language processing, natural language understanding, and machine learning.

ConversationHealth is reimagining the way that healthcare companies and their brands engage with patients and physicians through the use of conversational artificial intelligence. Over the past two years, the company has partnered with 16 global life sciences companies to deploy marketing, sales, and medical affairs “bots” on its proprietary text and voice SaaS platform. This platform is able to understand user questions through the use of natural language processing, natural language understanding and machine learning, while (human) conversation architects ensure accuracy and engagement within the conversation flows, creating unique taxonomies that reflect understanding of each client’s medication and condition lexicon. The platform has built-in compliance and reporting capabilities that meet the stringent requirements of healthcare communications, and can be used across multiple interfaces. It also integrates with all major software providers in the life sciences space, including Veeva and Salesforce. According to company leaders, to date ConversationHealth has delivered more than 6 million conversations in multiple languages and geographies across 12 therapy areas.
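
To make the mechanics concrete, here is a minimal sketch of how a platform of this kind might route a free-text question to a pre-approved answer. The intents, trigger phrases, and responses below are invented for illustration; ConversationHealth’s production NLU models are, by all accounts, far more sophisticated than this keyword scorer.

```python
# Minimal sketch of routing a user question to a library of pre-approved
# responses. Production platforms use trained NLU models; this keyword
# scorer is purely illustrative, and every intent, phrase, and response
# here is hypothetical.

APPROVED_RESPONSES = {
    "dosing": "Take one tablet once daily with food. See the full Prescribing Information.",
    "side_effects": "The most commonly reported side effects are listed in the Medication Guide. Please report any concerns to your doctor.",
}

INTENT_KEYWORDS = {
    "dosing": {"dose", "dosing", "how much", "how often"},
    "side_effects": {"side effect", "side effects", "reaction"},
}

def route(question: str) -> str:
    """Score each intent by keyword hits; fall back to a safe handoff."""
    q = question.lower()
    scores = {intent: sum(kw in q for kw in kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return "I'm not sure I understood. Let me connect you with a medical information specialist."
    return APPROVED_RESPONSES[best]

print(route("How much should I take each day?"))  # -> dosing response
```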

One of the questions that Med Ad News often asks of new ventures is, “How would you explain your offering to a typical 17-year-old?” In the case of ConversationHealth, this was not a difficult question to answer, since one of the founders of the company was, in fact, 17 years old, though far from typical.

“The roots of ConversationHealth grew out of my 17-year-old daughter’s startup, EmojiHealth, that she ideated and developed three years ago,” says CEO Dr. John Reeves. “Like all patients, she experienced the frustration of Dr. Google (overwhelming lists of ‘results’ when she was looking for health information – unsure of what was accurate or relevant based on her age, location, and previous medical history). As a teen her first instinct wasn’t to build a website or an app, but instead a conversational bot that better aligned to how teens were engaging with each other, with content that was friction-free, inherently human, broken into memorable bite-sized and snackable pieces, and most importantly was always a consistent user experience.”

When Alexandra Reeves pitched her idea at the global Exponential Medicine conference, she promptly won the “One to Watch” Award, and before leaving the event had started discussions with a global pharma company that would turn out to be her first client, looking at how teens with epilepsy could be better informed and supported through bots. “Clearly she was on to something big,” her father says. “Coming out of Exponential Medicine it became clear that there was an opportunity to leverage conversational AI to reimagine healthcare at the highest level – and ConversationHealth was born.”

Dad, unsurprisingly, was no stranger to digital health himself; he’d built digital health agencies, worked in healthcare publishing, and led a global digital health practice at McCann. So when Alexandra Reeves left to start her studies in engineering at Stanford (she’s now in her third year), he took over the company – though between exams “she still provides insights and direction on where conversational UX is going,” Dr. Reeves says.

What kinds of conversations does ConversationHealth offer? The exact sorts of text-based conversations that we’ve all gotten used to having on our phones, except with a well-trained artificial intelligence on the other end of the line. “Consumers, patients, and HCPs can have a conversation with a brand or company through text, just as they would text a ‘human’ friend or colleague,” Dr. Reeves told Med Ad News. “Just as human text interactions are sometimes initiated by you, they can also be initiated by your friend – in this case the pull-push opportunity available to brands inherent in bots.”

Of course, when someone is interacting with a bot, they are actually interacting with a virtual human powered by AI. “This means that customers can engage 24/7 at their moment of need – and that the interactions can be immediate, expert, empathetic, and personalized,” Dr. Reeves says. “Of course it is important to let the user know that they are engaging with a bot and not a human … but this has the positive effect that patients don’t feel embarrassed, or judged, and so ask questions that they might never ask another human, even their most trusted friends.”

For brands, the journey to actually deploying a bot in market can be surprisingly brief. According to Dr. Reeves, ConversationHealth typically has working prototypes up and running within three months, and commercial solutions online three to six months after that.

“Our clients have clear business challenges they want to address through their bots – transforming their med info model, or optimizing their existing digital touch points, or reimagining their patient support programs,” he says. “Of course it’s critical to decide which conversational model they want to leverage – is it text or voice – and where they want to deploy – web chat, a messaging app, a conversational banner ad, or even a digital human or avatar.”

All that said, Dr. Reeves emphasizes that conversational AI should be designed to complement existing programs and not completely eliminate humans.

“So for us, some of the heavy lifting is understanding the portfolio of ‘conversations’ that need to be designed to respond to customer needs and questions,” Dr. Reeves explains. “Our platform allows us to predict this library based on a number of inputs, including who the user is (consumer, patient, caregiver, primary care, specialist, et cetera); whether the bot is being designed to address questions about the product, the condition, or both; and whether the condition is acute or chronic – and what the key adherence challenges might be.”
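
As a rough illustration of how those inputs could seed a conversation library, consider the sketch below; the field names, audience labels, and topic lists are hypothetical, not the platform’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical representation of the inputs Dr. Reeves lists, plus a
# simple rule set that expands them into a starter conversation library.
# Field names and topics are illustrative, not the actual platform schema.

@dataclass
class BotProfile:
    audience: str   # e.g. "patient", "caregiver", "primary care", "specialist"
    scope: str      # "product", "condition", or "both"
    chronic: bool   # chronic conditions add adherence-focused conversations

def predict_portfolio(profile: BotProfile) -> list[str]:
    topics = ["what it is", "how it works"]
    if profile.scope in ("product", "both"):
        topics += ["dosing", "side effects", "storage and handling"]
    if profile.scope in ("condition", "both"):
        topics += ["symptoms", "diagnosis", "lifestyle and self-care"]
    if profile.chronic:
        topics += ["refills", "adherence check-ins", "long-term monitoring"]
    if profile.audience in ("primary care", "specialist"):
        topics += ["mechanism of action", "switching and titration"]
    return topics

print(predict_portfolio(BotProfile("patient", "both", chronic=True)))
```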

Once that portfolio of conversations is understood, the ConversationHealth UX team of medical professionals, writers and designers goes to work creating structured conversational flows that can respond to users’ requests, typically providing brief, efficient information followed by links to supporting assets should the user want to dive deeper. All of the conversations go through MLR (medical, legal, and regulatory) review to ensure that every interaction is accurate, on message, and on label.

“Once the portfolio of conversations is written, it is loaded into our SaaS platform and the bot is trained so that it can understand the potentially millions of user queries and then respond with the appropriate pre-written conversation,” Dr. Reeves says. “This training of the NLP engine is a key part of the AI – and is especially complex in the healthcare vertical where the language of products and medical terms is nuanced. For example, the word ‘start’ or ‘switch’ means something completely different than in normal communication. And of course the AI has to accurately identify and manage adverse events!”
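
That adverse-event requirement can be pictured as a screening gate that runs before any intent routing. The sketch below is deliberately naive, with invented trigger phrases and stub helper functions; real pharmacovigilance detection combines trained classifiers with human safety review.

```python
# Naive sketch of an adverse-event (AE) gate that screens every message
# before normal intent routing. Trigger phrases and both helper functions
# are invented stand-ins; real systems combine trained classifiers with
# human pharmacovigilance review.

AE_SIGNALS = ("rash", "hospitalized", "passed out", "allergic reaction",
              "felt worse after taking")

def log_for_pharmacovigilance(message: str) -> None:
    # Stand-in for escalation into a safety database and human review.
    print(f"[AE CAPTURED] {message}")

def route_to_intent(message: str) -> str:
    # Stand-in for the normal approved-response routing flow.
    return "Routing to the approved response library..."

def handle(message: str) -> str:
    if any(signal in message.lower() for signal in AE_SIGNALS):
        log_for_pharmacovigilance(message)
        return ("It sounds like you may be describing a side effect. "
                "I'm connecting you with our safety team now.")
    return route_to_intent(message)

print(handle("I developed a rash after my second dose"))
```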

What does all this look like from the user’s perspective? Dr. Reeves offers a few use cases.

“First, an HCP in an office is looking for information about off-label use of an oncology drug. The HCP can simply ask Alexa – and the med affairs skill designed by that brand can provide an immediate answer based on the most current assets (like approved medical letters). This is a multitasking experience – and should feel as though you have that brand (or franchise) MSL sitting in your office – available when you need them – but without the friction of calling and getting into a queue.

“Second, an HCP in an exam room is looking for directions on how to switch a patient to a new brand. The HCP can ‘text’ that brand’s bot and immediately get this information, with the bot being able to layer on additional information that might be relevant at that moment, such as critical patient onboarding instructions.

“What is exciting about this is that bots elevate pharma to being a real-time consultant to the HCP in their clinical day, always available to answer both simple and complex questions,” Dr. Reeves says. “We believe in this evolution from ‘search’ to ‘ask,’ with the inherent benefits of immediacy and accuracy overcoming the challenges of human-to-human interaction, which, although incredibly valuable, can rarely be available when needed.”

ConversationHealth bots can also be used in the context of clinical trials. “Bots are a great way to automate – or complement – existing patient support programs, allowing patients to ask questions of a nurse persona bot 24/7, as well as providing the brand with the ability to initiate conversations at relevant points along the patient journey. This translates to the clinical trial participant experience, where the bot can ensure ongoing interaction that supports trial protocol adherence and data acquisition through a more personal channel.”

What is next? Like many artificial intelligence tools, ConversationHealth’s underlying engine has barely scratched the surface of what might become possible in the future. “In five years we will see text, voice, and avatar-based bots that feel completely human, and that engage in conversations that users know are based on the most current, expert information,” Dr. Reeves says. “These interactions will feel completely ‘normal’ and they will leverage an expanding set of capabilities – including sentiment and literary analysis – meaning that they will be personalized and meet both the scientific and emotional needs of the moment.”

Also, the user journey will be better understood. “We are learning more and more every day about what patients want to know at what points in their journey,” Dr. Reeves told Med Ad News. “Through AI we will be able to predict what conversations we need to initiate at what time along each individual patient’s journey. We will also know what ‘nudges’ will be required, both to ensure medication adherence and to ensure patients follow their exercise, nutrition, sleep, and mental health plans. And bots keep getting better at aggregating wearable, device and self-reported data, and not just presenting this back visually but also being able to have a conversational exchange that provides feedback, direction and rewards.”


inVibe Labs

inVibe’s voice-response research platform allows brands and agencies to reach and gain insight from patients and physicians across hundreds of conditions and specialties in less than 24 hours.

inVibe Labs is the world’s first and only voice-response research platform created exclusively for the healthcare industry. The company helps marketing agencies, drug manufacturers and other industry stakeholders get quantitative and qualitative insights from physicians, patients and payers. inVibe is able to reach patients and physicians across hundreds of health conditions and specialties, all within 24 hours, and responses are processed by a hefty machine learning analytics engine for an additional layer of insight. This approach has already garnered plenty of attention; inVibe counts among the company’s clients pharma heavyweights like Pfizer, Merck and Johnson & Johnson, as well as agency leaders such as FCB Health, Publicis Health, and McCann Health.

“inVibe lets companies listen to what their customers have to say about their products without having to talk to them directly, and then its technology can analyze their voice to understand how they truly feel when they are talking,” says company founder Fabio Gratton. “In other words, inVibe can measure whether someone is really excited, very upset, or mildly frustrated. By understanding the underlying emotions, companies are able to get a more accurate picture about how people feel about their products, beyond just what they’re saying. Altogether, the platform helps companies collect valuable information that helps them make better decisions, faster.”

The idea for inVibe grew out of a realization that Gratton had while trying to support a stricken colleague. “Someone at my previous company was diagnosed with cancer, and I soon found myself struggling to provide them the emotional support I wanted to give them,” Gratton says. “Direct contact was impossible. A ‘get well’ card felt woefully inadequate. And clogging his voicemail was just rude. His friends and colleagues shared in my frustration.”

So Gratton had an idea: create an app that would allow users to invite friends on Facebook to leave voice messages in a communal mailbox, which would be delivered after a specified number of days. “We built a rudimentary version, and it worked. I remember how happy he was to hear all our voices while he was undergoing chemo.”

Then, a few years later, an industry colleague approached Gratton with a challenge he was facing in conducting qualitative market research at scale with patient influencers. “And it just seemed so obvious that an automated voice capture solution similar to what inspired that cancer app would be the ideal solution,” he says. “So we built what is now inVibe.”

The venture’s original goal was to enable clients to conduct both small- and large-scale voice interviews in less than 24 hours. “But by conducting these interviews we had all of this data from the voice signal – pitch, tone, and the frequency of how people are talking,” Gratton says. So the inVibe team developed an algorithm that could deconstruct all of these elements to better understand what a person actually meant and how they felt when they were speaking. After that, the next step was to utilize sociolinguists to examine the meaning of the language and words people use as well.
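
For readers curious about the raw ingredients, the kinds of voice-signal elements Gratton describes can be extracted with open-source audio tools. The sketch below uses the librosa library and a placeholder file name; it illustrates the category of features involved, not inVibe’s proprietary algorithm.

```python
import librosa
import numpy as np

# Illustrative extraction of the voice-signal elements Gratton mentions
# (pitch, loudness, pausing) using the open-source librosa library.
# This is NOT inVibe's proprietary model; "response.wav" is a placeholder.

y, sr = librosa.load("response.wav", sr=None)

# Pitch contour via probabilistic YIN; NaNs mark unvoiced frames.
f0, voiced_flag, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
pitch_mean = np.nanmean(f0)
pitch_var = np.nanstd(f0)   # wide variation can signal arousal or excitement

# Loudness proxy: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]

# Pausing: fraction of the clip that is silence (below 30 dB of peak).
voiced_intervals = librosa.effects.split(y, top_db=30)
voiced_time = sum(end - start for start, end in voiced_intervals) / sr
pause_ratio = 1.0 - voiced_time / (len(y) / sr)

print(f"pitch mean {pitch_mean:.1f} Hz, variability {pitch_var:.1f} Hz, "
      f"mean energy {rms.mean():.4f}, pause ratio {pause_ratio:.2f}")
```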

inVibe’s breakthrough moment, Gratton believes, “was when we realized that we were a ‘listening company.’ For years we described ourselves as a ‘voice company,’ partly because voice-tech was hot, but also because that’s exactly what we did – we captured voice.” Then one day Gratton was reading Snap Inc.’s S-1 as the company prepared to go public, and when Snap CEO Evan Spiegel described his vanishing-message company as a “camera company,” it just hit him. “It was so surprising, disorienting, yet it so perfectly described how Snap was completely redefining the camera experience for a new generation. I felt like that’s what we were doing – completely changing the way people listen to their customers – so now suddenly calling ourselves a voice company felt as uninspired as Snap calling itself a social network. At that moment I knew that how we described ourselves needed to do much more than highlight our technology, it needed to communicate our core mission, which is to ensure that all people are given a chance to be heard.”

inVibe’s ability to reach targeted physicians and patients quickly and on their own terms is impressive in its own right. But the company’s real differentiating characteristic is its ability to read between the lines of the responses – or, as company leaders say, “to uncover meaningful language patterns hidden in plain sight.” The company’s machine learning technology and speech-emotion algorithms are able to analyze spoken words in combination with pauses, sighs, tones, and pitches, to yield actionable insights. To achieve this, inVibe complements its traditional market research team with expert sociolinguists, enabling them to work together on a proprietary platform that allows easy searching, tagging and measurement of all aspects of language.

“Extracting the true meaning behind feedback provided by patients and physicians is critical to identifying and responding to market needs,” Gratton says. “By harnessing the most valuable resource of all – the emotions, thoughts, and beliefs that truly drive behavior – inVibe enables healthcare companies to conduct quantitative and qualitative research studies more efficiently and cost-effectively than ever before.”

How does a client get on board? “It’s really simple, actually,” Gratton explained to Med Ad News. “We ask clients who they want to talk to, and what they want to learn, and then we help put together a complete plan that includes everything from recruitment to analysis. We typically design the qualitative questionnaire that will power the virtual interviews, as we have deep experience on how to ask questions that will provoke measurable emotional responses. Once the client approves that, we’re off and running – and all the client has to do is sit back and wait.” Because inVibe’s methodology removes much of the friction from the research process, the company is able to field entire studies in just a few days. Clients can start listening immediately in an online dashboard inVibe provides, and see the high-level emotion scores derived from speech. Within a few more days, the company delivers a more comprehensive report that layers a linguistic analysis conducted by in-house experts over the machine-learning acoustic data – providing actionable insights based on the client’s specific project objectives.

That kind of speed requires a network of patients and HCPs already in place, and ways to reach them quickly and efficiently. To achieve that, inVibe had to upend the industry’s usual model for research. “When we first started building the platform, we realized that the research industry had a pretty antiquated approach to participant recruitment,” Gratton explains. “We found that most full-service agencies outsource all their recruitment to panel companies – and panel companies typically have an established database of people that have opted-in to participate in research. This approach is flawed on multiple levels – not only does it require a lot of manual coordination, it also means that if a specific panel company can’t find the right people, you need to go find another panel company that can. That adds unnecessary complexity and drags out the time-to-field research.”

inVibe took a different approach: it built a vetted “marketplace” comprising dozens of panel companies, then bolstered that with custom, in-house recruitment capabilities.

“Then we leveraged technology to build integrations with our partners so that we could always meet the quota demands of our clients with the highest-quality participants, in the shortest amount of time,” Gratton says. As a result, it’s not uncommon for the company to field entire studies in less than 24 hours.
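
A toy version of that quota-filling logic might look like the following; the panel names, feasibility counts, and in-house backstop are invented for illustration.

```python
# Toy sketch of filling a recruitment quota across multiple panel
# partners, in priority order, the way a marketplace integration might.
# Partner names and per-panel feasibility counts are invented.

def allocate_quota(quota: int, panels: list[tuple[str, int]]) -> dict[str, int]:
    """panels: (name, feasible_respondents), ordered by preference."""
    plan, remaining = {}, quota
    for name, feasible in panels:
        if remaining <= 0:
            break
        take = min(feasible, remaining)
        plan[name] = take
        remaining -= take
    if remaining > 0:
        plan["in_house_recruitment"] = remaining  # custom backstop
    return plan

print(allocate_quota(120, [("panel_a", 50), ("panel_b", 40), ("panel_c", 20)]))
# -> {'panel_a': 50, 'panel_b': 40, 'panel_c': 20, 'in_house_recruitment': 10}
```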

What does an interaction with an inVibe study look like to the participant?

“We set out from the start to make the experience as easy and delightful as possible,” Gratton says. “This meant removing the friction at every possible touchpoint. We don’t make them go to a physical location. We don’t need to schedule specific times for them to join a call. We don’t require them to download an app, or install any special kind of software on their computer. They are often invited via text message. They click a link where they are provided instructions and are exposed to the stimuli, and then, whenever it’s convenient for them, they simply click a button to record their responses by talking.

“Within minutes of validating their responses, they receive their honoraria. That’s it. The whole process takes no more than 10 to 15 minutes, and we’re able to collect a treasure trove of data because speaking is much easier than typing, and the voice layer provides much richer insights.”

What’s next for inVibe? Getting beyond English, first of all – but plenty more.

“We are currently working on a lot of exciting things,” Gratton told Med Ad News. “One of those is expanding into different languages other than English, setting us up not only for multicultural research, but also allowing us to expand into other countries.

“During the past few years, we have been building integrations so that we can help companies collect patient voice data for real-world evidence or patient-reported outcome studies. We have already conducted several of these, and we are continuing to expand to include algorithms that not only measure discrete emotions in the voice, but also look for biomarkers associated with disease progression, therapy effectiveness, or even diagnosis.”