The industry impact of President Biden’s Executive Order


By Will Reese, Evoke, an Inizio company, and Matt Lewis, Inizio Medical

This is a critical time in the evolution of technology-driven innovation in the field of pharma and life sciences. In the same way that innovations such as the steam engine and electricity transformed our world, AI is set to be one of the defining innovations of our lifetime. Harnessing the power of these tools in a safe, ethical, and responsible way is crucial to embedding the use of technology successfully into everyday business practices.

A key flashpoint in this debate has been the publication of President Biden’s Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence”. The document starkly illustrates the need to tackle some of the potential risks and security concerns associated with AI technologies. The Executive Order makes several recommendations for the healthcare sector, including improving the quality of healthcare data, using AI to improve the safety of healthcare workers and patients, addressing health equity issues, and governing the use of AI in local settings. Perhaps the most significant recommendation in the healthcare space is the creation of a strategic plan for AI use in health and human services. For companies in the life science sector, there is also valuable content on using AI in R&D for bio-design tools and nucleic acid sequencing. The Executive Order also encourages trade groups and industry bodies to introduce greater AI self-governance to improve quality and standards.

From a corporate governance perspective, there is currently a lack of regulatory and legislative oversight in the field of AI. Other legislative and regulatory bodies have started to make progress in this area: the European Union announced the forthcoming AI Act, which has been delayed by ongoing political disagreements over the technology’s application to biometric surveillance and the wider use of AI systems. That development was followed by the Cyberspace Administration of China publishing official guidelines on the use of generative AI. Although there is no equivalent in the United States at the time of writing, the Executive Order sets out a clear roadmap and direction of travel that was previously lacking for the U.S. This federal-level guidance also avoids the potential pitfall of individual U.S. states publishing their own AI plans, which would create a highly complex landscape for businesses to navigate.

Tracking and monitoring strategic plan updates following the Executive Order’s publication will be critical to assessing its ongoing implications, and the U.S. Department of Health and Human Services will play a pivotal role in facilitating this. Although the Executive Order sets out a broad national framework, further work is needed to translate its principles into sector-specific insights. Organizations should draft robust AI charters and frame their standards for ethics, compliance, and transparency to create a foundation from which to evolve. One of the core ethical issues in the use of AI in pharma and healthcare is the ability to verify that information is accurate and comes from a reputable source.

Ethics and AI: calling out fake news online

A crucial aspect of this is provenance and the ability to distinguish between synthetic AI-produced content and content that is genuinely human and authentic. For example, if an ad appeared proclaiming the health benefits of a new treatment or healthcare product, the veracity of the source would need to be established, so that the audience can trust that it is based on reliable information. Healthcare communications may in the future require content to have a watermark to verify that it is from a human source and not a robot, providing a clear way to differentiate between trustworthy and untrustworthy sources. In the context of the communication industry, AI has the ability to enhance human creativity, providing the space and insights for innovative idea generation, stories, and content, but this must be matched with robust, accurate information to be credible. Striking an appropriate balance between augmentation and automation is key for the symbiosis of human intellect with technological advancement. Achieving this effectively requires considering the ethical relationship between the datasets that underpin AI technology and human decision-making. Organizations should proactively examine how they want to approach the transparency of the data they use and the outputs they create.
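Neither the article nor the Executive Order prescribes a specific verification mechanism, but the provenance idea above can be sketched in code. A minimal, illustrative approach (assuming a publisher holds a secret signing key; the key, function names, and ad copy here are all hypothetical) is to attach a cryptographic tag to published content so an audience-facing tool can confirm it originated from, and was not altered since leaving, a known human source:

```python
import hashlib
import hmac

# Hypothetical publisher key; in practice this would live in a proper
# key-management service, never hard-coded in source.
PUBLISHER_KEY = b"example-secret-key"

def sign_content(content: str) -> str:
    """Produce a provenance tag for a piece of published content."""
    return hmac.new(PUBLISHER_KEY, content.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_content(content: str, tag: str) -> bool:
    """Check that content carries a valid tag from this publisher."""
    expected = sign_content(content)
    # compare_digest avoids timing side channels when comparing tags
    return hmac.compare_digest(expected, tag)

ad_copy = "New treatment shown to reduce symptoms in clinical trials."
tag = sign_content(ad_copy)

print(verify_content(ad_copy, tag))        # unaltered content verifies: True
print(verify_content(ad_copy + "!", tag))  # tampered content fails: False
```

Real-world provenance schemes (such as the cross-industry C2PA effort) use public-key signatures and signed metadata rather than a shared secret, but the underlying idea, a machine-checkable link between content and its origin, is the same.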

Health equity and AI data

There are significant health equity issues created by the limitations and structural biases in the datasets AI models may be trained on. This has implications for AI’s ability to provide an informed, accurate, and reliable guide to facilitate human judgment. Poorly trained AI models built on inequitable data are like a teacher at a poorly funded school with out-of-date textbooks: however dedicated the teacher is to educating their students, they lack the learning materials to provide a comprehensive and accurate understanding of the subject matter. Upgrading those materials is what corrects the blind spots and changes the outcomes.

Most publicly available datasets used to train AI models are based on work published by white male European authors and are not representative of the wider socio-economic demographics of society. That is why auditing these datasets, with human oversight built into the early design stages, is critical.
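The article does not specify how such an audit would run, but a first step is usually descriptive: measure how groups are represented in the corpus and flag those falling below a chosen threshold for human review. A minimal sketch (the records, the `author_region` field, and the 25% threshold are all hypothetical placeholders for real corpus metadata and review criteria):

```python
from collections import Counter

# Hypothetical training records; in practice these would be loaded
# from the corpus's metadata catalog.
records = [
    {"author_region": "Europe"},
    {"author_region": "Europe"},
    {"author_region": "Europe"},
    {"author_region": "North America"},
    {"author_region": "Africa"},
]

def representation(records, field):
    """Return the share of records for each value of a demographic field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

def flag_underrepresented(shares, threshold=0.25):
    """List groups whose share falls below the review threshold."""
    return [group for group, share in shares.items() if share < threshold]

shares = representation(records, "author_region")
print(shares)                        # e.g. {'Europe': 0.6, ...}
print(flag_underrepresented(shares)) # groups needing human review
```

A real audit would go further, checking representation jointly across fields and against the population the model is meant to serve, but even this simple tally makes a dataset's skew visible before the model is trained on it.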

It is important for communication professionals to help clients in the pharma and life science space interpret and understand data models effectively, so they grasp the implications for their clinical work. For instance, this needs to be considered when applying AI methodologies to clinical trial data for cancer patients or people with neuropsychiatric conditions, as the underlying dataset may not be sufficiently nuanced to give due consideration to those particular demographics. One of the key ways to tackle this is to build more diverse teams working in AI and constructing the relevant datasets in analytics teams.

The inclusion of greater end-to-end diversity as a systemic feature of this process is essential to improving the quality of data insight algorithms. This will help give pharma and life science companies the tools they need to deliver better and more accurate results.


Navigating these challenges to ensure businesses’ approach to AI is robust, ethical, and secure is multifaceted, and an effective strategy is key. There are clear risks associated with AI, and a need to ensure that AI content is presented in an ethically transparent way and based on equitable datasets with appropriate safeguards. However, this should not detract from the truly transformative potential of AI to radically change the way we work, if harnessed correctly. By carefully embedding AI technology into their day-to-day operational infrastructure, businesses can unlock its full power, delivering more of what they need by combining it with human intelligence.

Will Reese is chief innovation officer at Evoke, an Inizio company.


Matt Lewis is global chief artificial and augmented intelligence officer at Inizio Medical.