A recent article in Ad Age claimed that:
“New research from Omnicom Group’s OMD may move the seemingly fuzzy concept of engagement beyond the realm of academic debate by proving it really does move sales. The research indicated that not only does consumer engagement with media and advertising drive sales, but it also can drive sales more than media spending levels.”
The study, which covered three unnamed financial services brands, found three drivers of consumer brand preference: 1) how engaged consumers were with the ad itself, with a weighting of 49%; 2) how engaged consumers were with the media where the ad appeared, weighted at 31%; and 3) how much consumers like the brand at the outset, with a 20% weighting.
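For illustration only: if the study combines these drivers as a simple linear weighting (the article does not describe the actual model), a brand-preference score would look something like the sketch below. The 0-to-1 driver scores, the scale, and the function itself are my assumptions, not OMD’s.

```python
# Illustrative sketch only: treats the three reported weightings as coefficients
# on hypothetical 0-to-1 driver scores. The actual OMD model is not described
# in the Ad Age article.

def brand_preference_score(ad_engagement, media_engagement, prior_liking):
    """Combine assumed 0-to-1 driver scores using the reported 49/31/20 weightings."""
    return 0.49 * ad_engagement + 0.31 * media_engagement + 0.20 * prior_liking

# Example: highly engaged with the ad, moderately engaged with the medium,
# lukewarm on the brand going in.
print(brand_preference_score(ad_engagement=0.9, media_engagement=0.5, prior_liking=0.3))
# -> 0.656
```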
My take: The problem with these conclusions starts at the beginning — with the definition of customer engagement as time spent viewing an ad.
A few months ago, I proposed a definition of customer engagement:
“Repeated — and satisfying — interactions that strengthen the emotional connection a consumer has with a brand (or product, or company).”
According to Wikipedia, this definition “has gained currency and was used in the first international Annual Online Customer Engagement Survey”, conducted by British consultancy Cscape (which built upon, and improved, my definition).
But OMD (and, for the most part, the rest of the advertising industry) ignores this definition. It reduces the concept of engagement to the level of interaction a consumer has with an ad, and then equates time spent viewing an ad with driving “brand preference.” These findings are hard to swallow. They ignore:
1) Customer experiences. The extent to which a consumer’s experiences — sales experiences, support and service experiences, and experiences using the product or service — impact brand preference is either completely ignored or buried in the concept of “how much a consumer likes the brand at the outset” before viewing an ad.
2) Direct marketing. Financial services marketers are active direct marketers, making extensive use of direct mail and email. How OMD can tie ad “engagement” directly to sales, without incorporating the impact of these other marketing channels, was not explained. Increasingly, financial services marketers are adopting net measurement techniques and developing uplift models to predict and measure the incremental impact of specific marketing actions (see the sketch after this list). Yet OMD apparently has no problem attributing sales directly to time spent viewing ads, without factoring in the impact of other influences.
3) Sales effectiveness. If the OMD study had linked its measure of engagement to brand affinity, I might not have such an issue with it. But by carrying the impact all the way to ROI (i.e., a sale), the study ignores the fact that many financial product sales are intermediated by a salesperson. An ad may drive response, but simply assuming that the response produces a sale is wrong. Many a bank branch or mortgage rep has blown a sale through poor salesmanship.
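To make point 2 concrete, here is a minimal sketch of the kind of uplift modeling marketers use to isolate incremental impact. It uses a hypothetical two-model approach over a treatment/control split; the DataFrame, the column names (treated, converted), the feature list, and the choice of scikit-learn are all assumptions for illustration and have nothing to do with the OMD study.

```python
# Minimal two-model uplift sketch (illustrative; not the OMD methodology).
# Assumes a DataFrame with a binary 'treated' flag (e.g., received a mailing),
# a binary 'converted' outcome, and numeric customer feature columns.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def estimate_uplift(df: pd.DataFrame, feature_cols: list) -> pd.Series:
    """Predicted incremental conversion probability attributable to the treatment."""
    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # Fit separate response models for the treated and control groups.
    model_t = LogisticRegression(max_iter=1000).fit(treated[feature_cols], treated["converted"])
    model_c = LogisticRegression(max_iter=1000).fit(control[feature_cols], control["converted"])

    # Uplift = P(convert | treated) - P(convert | untreated), per customer.
    p_t = model_t.predict_proba(df[feature_cols])[:, 1]
    p_c = model_c.predict_proba(df[feature_cols])[:, 1]
    return pd.Series(p_t - p_c, index=df.index, name="uplift")
```

The details do not matter here; the point is that measuring incremental impact requires a control group and a per-customer prediction, whereas attributing sales to time spent viewing an ad skips that step entirely.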
The question the study attempts to answer — “what impact does ad viewing have on sales?” — is simply not an answerable question.
The questions that need to be answered are “how do consumers buy?” and “what is the appropriate role and impact of various media and touchpoints in the consumer’s decision process?”
To address these questions, financial services marketers need to develop a “theory of the customer”: what kinds of relationships they want, what it means for a consumer to be engaged in each of those relationship types, and how those forms of engagement can be measured and driven. Reducing customer engagement to “time spent viewing an ad” trivializes a potentially important strategic concept.
Unfortunately, financial services marketers looking for help answering those questions and addressing these issues are going to have to wait while the advertising industry plays its “my metric is better than your metric” games.