It’s clearly what’s next, but to take advantage of voice interactions, pharma will have to retune its content development and review processes.

By David Windhausen, executive VP at Intouch Group

Watching old episodes of Star Trek can be disappointing for the tech-savvy; the final frontier still seems so far off. Warp drive? Not there yet. The transporter? Nothing bigger than a particle. The replicator? Not yet. The holodeck? Big disappointment. The tricorder? Getting closer.

But voice interface computers? Recent statistics put voice at roughly 30 percent of online searches already, and it might be more than half by the end of next year. The Amazon Echo and its variants, Google Home … we are rapidly getting used to the idea of telling the TV to turn itself off, telling the lights to turn themselves on, or asking absolutely no one when the next “Fast and the Furious” movie comes out or what the correct spelling of “homonym” is. Amazon has even given a shout-out to Star Trek by making “computer” one of the default wake words for the Echo.

Those voice interface computers are getting smarter, too. Geordi can’t ask them what the correct matter-antimatter intermix rate is for warp seven just yet – but brands of all flavors are creating skills that let users ask questions and interact in highly specialized environments, backed by advanced machine learning and data collection. Those skills are edging closer and closer to real, live health care, too – Mindscape, a skill for the Echo, offers live and individually responsive talk therapy for the stressed, and MyPetDoc can tell you what might be wrong with your dog.

It doesn’t take much imagination to, well, imagine what might be next in the health care space for voice. “Alexa, track that I took my medication at 9:34 p.m.” “Alexa, renew my prescription.” “Alexa, when’s my next oncologist appointment?” “Alexa, how do I get to my oncologist appointment?” “Alexa, am I supposed to feel like this?” And the questions and the answers all tie back to databases, caregivers, and intelligence that can sort out their significance and move information, conclusions, and suggestions to whoever ought to be seeing and acting on them. Short of devices attached to the body, it’d be hard to imagine any technology achieving closer emotional proximity to a patient than a voice interface device.
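
To make that concrete, here’s a rough sketch of what could sit behind a request like “Alexa, track that I took my medication at 9:34 p.m.”: the voice platform resolves the utterance to an intent, and the skill writes it to the same patient record the rest of the ecosystem uses. Every name here (VoiceIntent, PatientRecordStore, MedicationLogIntent) is hypothetical, not any particular vendor’s SDK.

```python
# Hypothetical sketch only; names are illustrative, not a real voice SDK.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict


@dataclass
class VoiceIntent:
    name: str               # e.g. "MedicationLogIntent", resolved by the voice platform
    slots: Dict[str, str]   # slot values pulled from the utterance
    patient_id: str         # resolved from the linked patient account


class PatientRecordStore:
    """Stand-in for whatever data layer the brand's ecosystem already uses."""

    def log_dose(self, patient_id: str, taken_at: datetime) -> None:
        print(f"Logged dose for {patient_id} at {taken_at.isoformat()}")


def handle_intent(intent: VoiceIntent, store: PatientRecordStore) -> str:
    """Turn a recognized intent into a record update and a spoken reply."""
    if intent.name == "MedicationLogIntent":
        taken_at = datetime.fromisoformat(intent.slots["time"])
        store.log_dose(intent.patient_id, taken_at)
        return "Got it. I've logged that dose."
    return "Sorry, I can't help with that yet."


# "Alexa, track that I took my medication at 9:34 p.m."
reply = handle_intent(
    VoiceIntent("MedicationLogIntent", {"time": "2019-05-20T21:34:00"}, "patient-123"),
    PatientRecordStore(),
)
print(reply)
```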

Have pharma brand managers taken advantage of this new potential pathway to patients?

Not really, not yet. Brands are asking about things like voice-enabled chatbots. And maybe the world will see a few voice-enabled pharma brand chatbots in the next year or two. But a voice-enabled brand chatbot on its own, while surely delightful and perhaps a strong candidate to win bright shiny awards for innovation, likely won’t offer much more to or for the brand than a web-based chatbot would.

Because voice isn’t just a tactic, any more than a webpage or a social media presence or interactive content ever were. Voice, properly engaged, has the potential to be a highly personal mode of interaction within an entire connected brand and care experience. Not for every brand – but for a specialty brand, a Humira sort of brand, creating a voice skill for patient support services, placing it on top of robust and iterative AI, weaving it into an entire ecosystem including mobile apps, websites, call centers, the lot – well, for that brand, voice could be game-changing. Plenty of specialty brands have already built mobile apps that offer patient education, tracking of symptoms or pain levels, and reminders for the next dose or injection, all integrated and tied to underlying data sources and media. But the leap from a mobile app to a voice skill will bring those services and offerings that much closer to the patient, and make them that much easier for the patient to access.

So just move ‘em from mobile to voice. Easy, right?

Well, again, not really. To walk the last mile from “Build me a voice chatbot” to that truly integrated experience is going to force pharma to finally face one of its least tractable bugbears: the balkanization of content development.

All of you who’ve ever had to go through content approval with a pharma client know what I’m talking about. In our industry each media channel has its own well-trod path for validation and its own review teams and its own extra-special processes built up over years of back and forth between brand managers and the legal and regulatory department, each a bit like the film that appears on top of the queso dip from California Tortilla if you leave it out for too long. And each of those cheesy films is its own unique cheesy film; you’ve got to dip your chip in all of them if you want your content to appear across multiple media platforms.

If ever there was a medium begging for the wholesale destruction of the “dip in cheesy film” model of content development and review, it’s voice interaction. Because voice interactions can and should be dynamic and unstructured, more so than any other channel has been for us. Dynamic and unstructured interactions are going to demand the ability to reach across platforms and data sources to create and change and build appropriate responses in real time, to recognize who is asking a question and why, and to find not just the right answer but the right path to convey it. Maybe the right response isn’t even a voice response; maybe it’s a text message or a video or even a phone call from a live person. And to be able to do that in any kind of manageable timeframe for the user, we must break down the separation by medium of our legal and regulatory processes for approving content.
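
As a rough illustration of that “right path” idea, here’s a minimal sketch – with made-up class and field names, not a real product – of how a single approved answer might decide whether it comes back as voice, a text message, a video, or a call from a live person.

```python
# Hypothetical sketch: the same approved answer chooses its delivery channel
# based on who is asking and what kind of response it is.
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserContext:
    is_driving: bool
    prefers_text: bool
    flagged_for_nurse_follow_up: bool


@dataclass
class Answer:
    text: str
    video_url: Optional[str] = None
    urgent: bool = False


def choose_channel(ctx: UserContext, answer: Answer) -> str:
    if answer.urgent or ctx.flagged_for_nurse_follow_up:
        return "live_call"   # escalate to a human in the call center
    if answer.video_url and not ctx.is_driving:
        return "video"       # richer content when the user can actually watch it
    if ctx.prefers_text:
        return "sms"
    return "voice"           # default: answer in the channel the question came from


# "Alexa, am I supposed to feel like this?"
channel = choose_channel(
    UserContext(is_driving=False, prefers_text=False, flagged_for_nurse_follow_up=True),
    Answer(text="That symptom is worth discussing with your care team."),
)
print(channel)  # -> live_call
```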

Can it be done? Of course it can. I know, because we’ve done it with clients, on a small scale. When building these sorts of unstructured, cross-media experiences, we start by asking, “How might users want to engage? What might those engagements look like, whether through voice or chatbot or mobile app or (fill in medium here)? Who are the users, what questions will they ask, and what is the best pathway to a response, and how can we learn from all this?” We create use case scenarios that are not channel-specific, that explore as many pathways as possible. We ask, ask, ask – “If the interaction goes this way, what response should the system create? Does the system have the knowledge and training to respond to this question? What content or data source should be used in this circumstance? Can that content or those data sources be engaged across multiple channels?” At the end of it all, we have questions and we have content – not web content or video content or mobile content, just content – and what we’ve created is not a website or a video or an app but a multifarious interactive experience, with voice as the way in.
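
One way to picture “just content”: a single reviewed content record that each channel renders in its own way, rather than a separate asset – and a separate review – per channel. This is only a sketch under that assumption; the names are illustrative, not a real content management system.

```python
# Illustrative only: one reviewed content record, rendered per channel.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ContentItem:
    content_id: str
    approved_copy: str                                         # the single reviewed source of truth
    renditions: Dict[str, str] = field(default_factory=dict)   # channel -> channel-specific framing


def render(item: ContentItem, channel: str) -> str:
    # Fall back to the approved copy when no channel-specific rendition exists,
    # so every channel draws from the same reviewed content.
    return item.renditions.get(channel, item.approved_copy)


injection_reminder = ContentItem(
    content_id="dose-reminder-01",
    approved_copy="Your next injection is due tomorrow.",
    renditions={
        "voice": "Just a reminder: your next injection is due tomorrow.",
        "sms": "Reminder: next injection due tomorrow.",
    },
)

print(render(injection_reminder, "voice"))
print(render(injection_reminder, "mobile_app"))  # falls back to the approved copy
```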

Creating experiences like that forces us to break out of the traditional pharma content cycle of build it, approve it, get it out there, move on to the next thing. Users might ask questions that the system can’t answer or respond to content in an unexpected way. So the AI lurking behind the experience – and the puppet masters lurking behind the AI – must constantly evaluate how well questions are being answered and what areas might require more data or retraining. That’s what the “intelligence” in “artificial intelligence” means, after all – the ability to learn and improve. And to live up to the potential of that kind of iterative learning and improving process, we need to tune our methods of review accordingly – otherwise, our experiences will always seem a bit slow, like the first generation of Siri.
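
In practice, that iterative loop can start as simply as logging every exchange and surfacing the ones the system missed or matched weakly, so the team behind the AI knows where to add content or retrain. A minimal sketch, with an assumed confidence threshold and hypothetical names:

```python
# Minimal sketch of the evaluation loop; threshold and names are assumptions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Exchange:
    question: str
    matched_intent: Optional[str]
    confidence: float


def needs_review(log: List[Exchange], threshold: float = 0.6) -> List[Exchange]:
    """Return exchanges the system either failed to match or matched weakly."""
    return [e for e in log if e.matched_intent is None or e.confidence < threshold]


log = [
    Exchange("when's my next oncologist appointment", "AppointmentLookup", 0.92),
    Exchange("am I supposed to feel like this", None, 0.0),
    Exchange("can I take this with ibuprofen", "DrugInteraction", 0.41),
]

for exchange in needs_review(log):
    print("Flag for more data or retraining:", exchange.question)
```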

Now, the transition to this sort of thinking won’t happen overnight. We as agency partners are going to have to ease our clients into it, to move them step by step, a little at a time, from the tried-and-true idea of discrete pieces of medium-specific content to a free-flowing, responsive paradigm. But that transition, I believe, is our great task as pharma marketers over the next three to five years. Those who successfully navigate it will be able to take full advantage of voice interface interactions and whatever unimagined mediums might come after. Those who don’t – well, they’ll turn out a bit like Scotty in his “Star Trek: The Next Generation” cameo, relics watching the future fly past.