Building an organizational culture around data-driven decision-making requires the right processes and a leadership team committed to performance transparency and continuous improvement, particularly when initial hypotheses are ultimately disproved in the real world.
Is it better to deliver good news first or bad news first? This is a common point of debate in the corporate world, and one need not do much digging to find passionate advocates on either side. Yet, according to Daniel Pink, a New York Times bestselling author and former speechwriter for Vice President Al Gore, there is, in fact, a “right” answer.
“If you ask people what they prefer, four of five prefer getting the bad news first,” he told The Washington Post last year. “The reason has to do with endings. Given the choice, human beings prefer endings that elevate. We prefer endings that go up, that have a rising sequence rather than a declining sequence.”
Irrespective of whether you agree with Pink, there is an important lesson embedded within his assertion — and the broader debate to which it is contributing. Critically, the question at issue is not, “Should I deliver bad news?” It is, “When should I deliver bad news?”
There is an argument to be made that an impulse toward ostrichism is ingrained in human nature. In life, and especially in business, however, embracing bad news, and, more importantly, learning from it, is an essential component of success. A poor performance is not tantamount to a resounding failure as long as it serves as the first term in a rising sequence, a truth of which many professionals in the marketing space still need to be convinced. In fact, we believe that performance transparency is arguably the cornerstone of a productive, trusting, results-driven consultative relationship.
Learning from Failing Fast
Marketers, and data analysts on marketing teams in particular, face immense pressure from both internal (upper management) and external (client) stakeholders to strike gold with the first swing of the pick. Ours is a results-driven business, and its intermediate position between the hard and soft sciences makes many of us feel obliged to constantly prove our ability to move the needle for our clients.
Most marketers are accustomed to their clients asking, “Is all of this worth it?” And, because of the nature of the business, there is a strong incentive for marketing analysts to find evidence that supports a response of, “Absolutely.” It is always tempting for marketing professionals to focus on the metrics that reflect most favorably on their efforts, structuring their analytics reports in such a way that the primary takeaway is that everything is going swimmingly. But by framing the hitting of any target, as opposed to the specific target established at the outset of a project, as a success, marketers lose a tremendous opportunity for learning, growth, and innovation.
Data-driven marketing is an iterative science, as, arguably, are all sciences. Indeed, scientific hypotheses are, by definition, proposed explanations based on limited evidence that serve as starting points for further investigation. They are not meant to be axioms, but “educated guesses.” As such, expecting perfection from a marketer’s initial plan of attack is counterproductive insofar as it disincentivizes marketers from experimenting with new approaches.
Instead, like our counterparts in the startup and software development spaces, marketers should be encouraged to fail fast. Data-driven projects begin with posing key business questions, assembling media and creative strategies, selecting KPIs, and establishing performance benchmarks, but every stakeholder should understand that all of these project components are subject to change, and that making such changes strategically can generate a considerable return on investment.
If a project’s underlying hypothesis is disproved, the proper response is not to turn a blind eye or to paper over the result with metrics that speak to the aspects of the project that worked well, but to investigate where and why our educated guess went astray and to make informed adjustments as appropriate. Only then will we be able to deliver better outcomes during the next project, or the next phase of the current one.
Striving for Shades of Grey
To ensure they make the right adjustments (as opposed to just some adjustments), and, in so doing, build trust with their clients and supervisors, marketing analytics professionals must move beyond evaluating performance in binary terms. Answering questions like, “Did our ad spend deliver the desired ROI?” is important, but it neither tells the whole story of a campaign nor facilitates continuous improvement.
To supplement their “yes or no questions,” marketers should be asking questions like, “Why did certain components of the campaign work, and what does this tell us about our client’s brand, the location of their customers, and the context in which these customers were engaged?” and “Why did certain components of the campaign not work, and what does this tell us about these same things?”
As an example, imagine a pharmaceutical company that wants to market a new migraine drug. One facet of its product roll-out might involve a paid search campaign built around buying keywords like “what are migraines” and “migraine symptoms.”
Traditionally, this company (and/or its agency partner) would tie the success of such a campaign to reach or overall awareness, measures that provide little visibility into the effectiveness of the dollars being spent, or to a single pass/fail benchmark. For instance, if the bounce rate of the campaign’s landing page came in below the target benchmark, the campaign was considered a success; if it did not, the campaign was considered a failure.
Such a black-and-white evaluation is not particularly instructive. What if the landing page’s bounce rate was incredibly low, but the campaign did not drive a discernible uptick in conversions (or some other measurable, value-linked action)? Should the campaign really be considered a success? The short answer is, “Probably not.” To gauge the true success (or lack thereof) of the campaign, the company would need to dig deeper and investigate whether the campaign drove improvements to key outcomes-based metrics.
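To make that deeper dig concrete, here is a minimal sketch, in Python and with entirely hypothetical sessions, bounces, conversions, and benchmark values, of how a campaign might be scored against multiple outcome metrics rather than a single pass/fail check:

```python
# Illustrative sketch with entirely hypothetical numbers: scoring a campaign
# against several outcome metrics instead of a single pass/fail check.

campaign = {
    "sessions": 48_000,    # landing-page sessions driven by the ads
    "bounces": 14_400,     # single-page sessions with no further interaction
    "conversions": 96,     # value-linked actions (e.g., "find a doctor" clicks)
}

benchmarks = {
    "bounce_rate": 0.40,       # success target: at or below 40%
    "conversion_rate": 0.005,  # success target: at or above 0.5%
}

bounce_rate = campaign["bounces"] / campaign["sessions"]
conversion_rate = campaign["conversions"] / campaign["sessions"]

print(f"Bounce rate:     {bounce_rate:.1%} (target <= {benchmarks['bounce_rate']:.0%})")
print(f"Conversion rate: {conversion_rate:.2%} (target >= {benchmarks['conversion_rate']:.2%})")

# Here the campaign clears the bounce-rate benchmark (30% vs. 40%) yet falls
# well short on conversions (0.20% vs. 0.50%); a binary verdict based on
# bounce rate alone would hide exactly the signal we need.
```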
In the pharmaceutical space, a campaign’s reach is less important than the extent to which the campaign drives meaningful behavior change among its target audience. As such, instead of running a single awareness-oriented campaign targeted at a general audience, this company would be better served by running multiple campaigns designed to reach patients with unique content in the moments that matter — a one-size-fits-all approach is rarely the answer.
One campaign might feature content that is carefully tailored to search engine users’ expectations. These consumers are looking for information about the causes and symptoms of migraines, and that is precisely what the landing page for this campaign would provide. A brand concept or call-to-action might be included toward the bottom of the page, but content that directly pertains to the search terms in question would be foregrounded. In short, the company would be aiming to meet consumers’ expectations before attempting to drive a meaningful behavior change like nudging them to speak to their doctors about a specific product.
By contrast, a second campaign might take a more directly promotional approach, delivering a branded message before providing information about the causes and symptoms of migraines. If consumers are performing these searches, they or someone they care for likely could do with some migraine medication, so why not cut right to the chase?
Running these campaigns simultaneously would give the pharmaceutical company actionable insights into which approach is more effective at driving behavior change, and where there are opportunities for optimization along the consumer journey. While one strategy might end up resonating more with younger, mobile-first consumers, the other might end up resonating more with suburban, high-income consumers. By combining this approach with careful tracking and analysis of strategically selected KPIs, the opportunities for A/B testing to drive continuous improvement would be endless.
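As a rough illustration of what that comparison might look like in practice, the sketch below runs a simple two-proportion z-test on the two campaigns’ conversion rates. All figures are invented for the example, and a real analysis would be considerably more involved:

```python
# Minimal sketch, using entirely hypothetical data, of a two-proportion z-test
# comparing the conversion rates of the two simultaneous campaigns.
from math import sqrt
from statistics import NormalDist

# (conversions, sessions) observed for each hypothetical campaign
content_first = (230, 41_000)  # expectations-first landing page
brand_first = (172, 39_500)    # branded-message-first landing page

def two_proportion_z(a, b):
    """Return the z statistic and two-sided p-value for rate(a) vs. rate(b)."""
    (ca, na), (cb, nb) = a, b
    pa, pb = ca / na, cb / nb
    pooled = (ca + cb) / (na + nb)                        # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / na + 1 / nb))  # standard error under H0
    z = (pa - pb) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))                # two-sided p-value
    return z, p

z, p = two_proportion_z(content_first, brand_first)
print(f"Content-first conversion rate: {content_first[0] / content_first[1]:.2%}")
print(f"Brand-first conversion rate:   {brand_first[0] / brand_first[1]:.2%}")
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference, not noise
```

In practice, an analyst would also segment such results by device, geography, and audience before drawing conclusions, which is where the demographic differences described above would begin to surface.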
Crafting a Storybook Ending
This access to more nuanced insights allows marketers to think about — and fine-tune — not only keyword bidding strategy, but marketing activation, back-end technical requirements, and service design. It affords us a panoramic view of consumers’ journeys, and enables us to have complex conversations about whether we are meeting consumers’ expectations, whether our predictions have been accurate, whether we are engaging the right people at the right time, and whether our content and creative strategy is aligned with our media strategy.
Questions stemming from these conversations can seldom be answered with a simple “Yes” or “No,” and at the end of the day, that is a good thing. Innovation occurs when setbacks are met with curiosity rather than despair, and there is a distinct competitive advantage waiting to be seized by marketers willing to learn as much from seeds that do not bear fruit as from those that do. If we bury our heads alongside these seeds, we will inevitably struggle to address our weaknesses and bolster our strengths.
And ultimately, as Pink pointed out, “human beings prefer endings that elevate.” People — including clients — do not mind rocky starts as long as things turn out well in the end. But to get to a place where there is routinely good news to follow the bad, marketers must recognize, embrace, and leverage the inherent didactic value of under-performance when it happens.
Written by Kevin Troyanos & Kate Gattuso
Kate Gattuso – Associate Director, Business Intelligence – Publicis Health Media
I bring a diverse background spanning media, technology, and clients, including work with GSK, Shire, Merck, and Daiichi Sankyo; B2B, CPG, and fintech brands; and new and emerging technologies. My career has led me to focus on healthcare, both consumer and professional, leading Business Intelligence on accounts such as the GSK Rx franchise, Shire, EMD Serono, Supernus, and Daiichi Sankyo.
Working on such a diverse portfolio of measurement strategies, with a focus on tying marketing investment to business impact, has shaped my holistic, personalized, data-driven approach. My passion for understanding consumer behavior, and how user experience drives change, has enabled me to craft digital strategies focused on business impact and to bring innovative decision-making to my clients and team.
Kevin Troyanos – SVP, Analytics & Data Science – Saatchi & Saatchi Wellness
I lead the Analytics & Data Science practice at Saatchi & Saatchi Wellness. I have focused my career within the healthcare marketing analytics space, empowering healthcare marketers with data-driven strategic guidance while developing innovative solutions to healthcare marketing problems through the power of data science.
I’ve worked to measure, predict, and optimize marketing and business outcomes across personal, non-personal, digital, and social channels. I’ve led engagements with brands that span all stages of the product lifecycle, with a particular focus on established brands.
My role is to guide the departmental vision and lead innovation initiatives, effectively positioning marketing analytics as a competitive differentiator and organic growth driver for the agency at large.