Organizations must make a concerted effort to address the end-to-end efficacy questions raised by artificial intelligence if they are to realize the technology’s full potential.
In parts one and two of this three-part series, I explored two key reasons organizations should resist granting total operational autonomy to their artificial intelligence (AI) solutions: ethics, and data privacy and security.
Provided they build their solutions deliberately — by, for instance, adhering to the guidelines issued by governing bodies like the Organisation for Economic Co-operation and Development — the ethical, privacy, and security challenges posed by AI need not derail organizations’ efforts to integrate this transformational technology into their day-to-day operations. That said, safeguards that ensure that the use of AI does not run afoul of normative principles and/or codified laws do little to ensure that an AI solution actually drives results — such safeguards help organizations leverage AI responsibly, not necessarily effectively.
To be clear, there is no doubt that AI’s capabilities have matured considerably over the course of the last decade. Early in the 2010s, the most sophisticated algorithms were only able to correctly categorize around 70 percent of the images they were fed, but by 2018, their image categorization accuracy had jumped to 98 percent — three percentage points higher than the average human’s accuracy. Indeed, as I covered in part two of this series, AI is becoming so powerful that analytics professionals in the healthcare space must implement precautionary measures to prevent their AI solutions from autonomously re-identifying de-identified data in violation of HIPAA privacy protections.
This rapid, ongoing maturation notwithstanding, there is — and will be for the foreseeable future — a limit to what AI can do effectively. Particularly when it comes to endeavors like marketing that involve insight, creativity, and empathy, AI can elevate humans’ work in a range of ways, but it cannot do it for them.
The Myth of AI-Driven Creative
In recent years, a mix of terminological imprecision and hyperbolic optimism about AI’s capabilities has generated a great deal of discourse on “AI-driven creative.” When armed with a fine-tuned algorithm and access to enough audience data, a marketer can simply hand over full control to an AI solution and let it design and manage a campaign from end to end — or so the argument goes. The reality is not so straightforward. Just as machine learning algorithms are prone to reinforcing any human biases found in their training data (see part one of this series), they are also prone to falling into marketing feedback loops when left to their own devices.
An AI solution trained on data drawn from an organization’s recent successful campaigns may well “learn” the ideal approach for crafting campaign messaging given specific market conditions. However, unlike an AI solution designed to diagnose an illness based on X-rays or CT scans, an AI solution designed for marketing must operate within a constantly shifting landscape. Barring a major medical breakthrough, a diagnostic solution will maintain a consistent degree of efficacy by continuing to identify the same — or very similar — patterns in patients’ medical data. By contrast, a marketing solution that continually recycles the same keywords or design elements that drove results in past campaigns will lose its efficacy in a matter of months.
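This feedback loop can be illustrated with a toy simulation (all keywords, rates, and the fatigue factor below are invented for illustration): a solution that greedily reuses whichever keyword has performed best sees its results erode as audience fatigue sets in.

```python
# Toy model of the marketing feedback loop: an AI planner always reuses
# the keyword with the best current click-through rate (CTR), and
# repeated exposure fatigues the audience, decaying that keyword's true
# CTR each time it is chosen. All numbers are invented for illustration.

def run_campaign_rounds(base_ctrs, rounds, fatigue=0.85):
    """Greedily pick the best-performing keyword each round; its true
    CTR decays by `fatigue` after every reuse."""
    true_ctrs = dict(base_ctrs)     # current "real" effectiveness
    observed = []                   # CTR achieved each round
    for _ in range(rounds):
        best = max(true_ctrs, key=true_ctrs.get)
        observed.append(round(true_ctrs[best], 4))
        true_ctrs[best] *= fatigue  # audience fatigue after reuse
    return observed

history = run_campaign_rounds(
    {"affordable": 0.050, "proven": 0.040, "new": 0.030}, rounds=6)
print(history)  # CTR erodes round over round
```

Without a human introducing fresh creative, the greedy strategy has nowhere to go but down: every round either reuses a fatigued winner or falls back to a weaker alternative.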
While traditional economic theory holds that all consumers are rational — and, thus, predictable — decision-makers, fields like behavioral economics articulate a compelling case that consumers frequently make decisions that defy economic rationality. In plainer terms, most consumers are unpredictable, inconsistent, impulsive actors — that is to say, they are human. In the healthcare space, this irrationality (in a non-pejorative sense) is only exacerbated by an incredibly dynamic market landscape and the fact that purchasing decisions often bear upon a patient’s quality of life — or even their life itself.
Simply put, consumers are not mathematical functions. Today, inputs X, Y, and Z might prompt a consumer to take action A; tomorrow, the same inputs might prompt the same consumer to take action B. This unpredictability is a challenge inherent to marketing as a whole — not AI-driven marketing specifically — but dealing with it in a productive way requires engaging with consumers on their own terms. It requires a fundamental understanding of the urge to act on emotion instead of rationality. It requires something that an algorithm cannot develop (at least not yet): insight into the human condition.
Keeping a Human in the Loop
This is not to suggest that AI has no place in marketing, but rather to dispel the fantasy of a set-it-and-forget-it, end-to-end AI marketing solution. Even in an era in which an algorithm can correctly categorize 98 out of 100 images, when it comes to the creative aspects of marketing, a human must be kept in the loop.
Resonant messaging emerges from the interweaving of information (e.g., sales data), observations (e.g., trendlines), analyses (e.g., causes of trendlines), and insights (e.g., why consumers behaved the way they did), and while AI solutions are capable of producing the first three, insights are a uniquely human domain. That said, insights are neither universal nor easy to come by, and the more a marketing strategist knows about which “why” they are tasked with unearthing, the better.
To that end, AI solutions are perfectly equipped to level up an organization’s approach to audience segmentation. Analyzing data drawn from an organization’s recent successful campaigns may not enable an AI solution to effectively craft campaign messaging that speaks to a new audience as part of a new campaign, but it will enable the solution to produce an unprecedentedly granular breakdown of the kinds of messaging that worked for specific kinds of consumers in past campaigns.
For instance, over the course of a past campaign, one audience segment may have responded to messaging that emphasized the affordability of a treatment, another to messaging that highlighted how the treatment’s benefits differ from those of a more established alternative, and a third to messaging that stressed the treatment’s safety despite its recent introduction to the market. It is entirely possible that each of these audience segments was composed not of patients in the same age group, tax bracket, or disease state, but of patients whose profiles share a complex matrix of a dozen seemingly random factors.
By highlighting these subtle commonalities that would likely fly under a human analyst’s radar, an AI solution helps strategists and creative teams zero in on where they should direct their efforts. A detailed breakdown of these three audience segments empowers a strategist to undertake precise qualitative research in pursuit of a key insight instead of banging their head against a wall trying to explain the dramatic variation in behavior among similarly aged patients with the same income (or any other “common” audience segment). The AI solution cannot arrive at the insight for the strategist, but it can point the strategist in the right direction.
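At its simplest, the kind of per-segment breakdown described above is an aggregation over campaign engagement logs. The sketch below uses entirely hypothetical segment labels, message themes, and outcomes; a real solution would derive the segments themselves from many profile features rather than take them as given.

```python
from collections import defaultdict

# Hypothetical campaign log: (audience_segment, message_theme, responded).
# Segment labels and themes are invented purely for illustration.
log = [
    ("segment_a", "affordability", True),  ("segment_a", "safety", False),
    ("segment_a", "affordability", True),  ("segment_a", "novel_benefits", False),
    ("segment_b", "novel_benefits", True), ("segment_b", "affordability", False),
    ("segment_b", "novel_benefits", True), ("segment_b", "safety", False),
    ("segment_c", "safety", True),         ("segment_c", "safety", True),
    ("segment_c", "affordability", False), ("segment_c", "novel_benefits", False),
]

def best_theme_per_segment(log):
    """Compute the response rate per (segment, theme) pair, then surface
    each segment's best-performing message theme."""
    shown = defaultdict(int)
    hits = defaultdict(int)
    for segment, theme, responded in log:
        shown[(segment, theme)] += 1
        hits[(segment, theme)] += responded
    rates = {key: hits[key] / shown[key] for key in shown}
    best = {}
    for (segment, theme), rate in rates.items():
        if segment not in best or rate > best[segment][1]:
            best[segment] = (theme, rate)
    return best

print(best_theme_per_segment(log))
```

The output maps each segment to the theme it responded to most reliably — the raw material a strategist would then interrogate for the underlying “why.”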
Once the strategist arrives at a key insight that speaks to how the messaging for a new campaign should differ from the messaging for a past campaign — and how it should remain the same — the onus shifts to the organization’s creative team. By interpolating the strategist’s insight into copy and design elements that worked well in the past, the creative team can craft new messaging that is at once informed by the target audience segment’s shifting preferences and grounded in proven fundamentals.
At this point, AI can re-enter the fold as a final, objective “verifier” of the creative team’s interpolations. A machine learning algorithm trained on an organization’s historical campaign performance data can score an email subject line or piece of imagery against what has worked in the past. Filtered through the strategist’s insight(s), that score can indicate whether the creative team hit or missed the mark. As a result, instead of waiting months for a retrospective report on the performance of the campaign’s first tranche of ad spend, the creative team can go back to the drawing board to fine-tune their work before the campaign even begins.
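A minimal sketch of the “verifier” idea: score a draft subject line by its similarity to historically high-performing lines. Token-set overlap (Jaccard similarity) stands in here for a model actually trained on campaign outcomes, and every subject line below is invented.

```python
# Sketch of the "verifier" step: score a draft email subject line
# against historically high-performing lines. A real solution would use
# a model trained on campaign performance data; simple token-set
# Jaccard similarity stands in here, and all lines are invented.

def tokens(text):
    return set(text.lower().split())

def score_against_history(candidate, past_winners):
    """Mean Jaccard similarity between the candidate and past winners."""
    sims = []
    for winner in past_winners:
        a, b = tokens(candidate), tokens(winner)
        sims.append(len(a & b) / len(a | b))
    return sum(sims) / len(sims)

past_winners = [
    "an affordable treatment that fits your life",
    "affordable relief designed around your life",
]

on_brief = score_against_history(
    "affordable treatment built around your life", past_winners)
off_brief = score_against_history(
    "limited time offer act now", past_winners)
print(round(on_brief, 2), round(off_brief, 2))
```

A draft that echoes what resonated before scores high; one that abandons it scores near zero — a fast, pre-launch signal in place of a months-later retrospective report.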
A Collaborative Future
Clearly, running an effective marketing campaign is a varied, multistep process. When deployed responsibly, AI solutions stand to revolutionize both ends of this process through advanced retrospective analyses that facilitate nuanced audience segmentation and real-time, in-market A/B testing. However, the bridge between what worked in the past and what is likely to work in the future is built from two distinctly human materials: insight and creativity. An AI solution can give marketers a point of departure and tell them whether they arrived at the right destination, but it cannot chart the course between the two.
And, ultimately, for the reasons outlined above as well as for ethical, privacy, and security reasons, the most effective AI deployments — in healthcare marketing, in healthcare more broadly, and in the business world at large — will continue to be those that position AI as an augmentor of organizations’ existing capabilities. Pitting AI-driven operations against human-driven operations creates a false dichotomy. It is by working together, human and machine in tandem, that we will cross over the threshold into a new era of efficiency, effectiveness, and, if all goes well, prosperity.