By IESE Insight
Now that Europe’s sweeping General Data Protection Regulation (GDPR) has gone into effect, as of May 25, 2018, we seem to be entering a new era of data transparency and responsibility. For Europe, at least, the onus has shifted onto the companies collecting data to prove they need it and to dispose of it when they don’t. And elsewhere, big online players — e.g., Google and Facebook, especially after its Cambridge Analytica debacle — are clearly making moves to become more transparent and expand consumers’ control over their own data.
Amid some companies’ nervousness (near panic) about GDPR implications and data-scandal backlash, a timely study by Tami Kim, Kate Barasz and Leslie K. John reveals that increased transparency about data usage in online advertising can actually increase ads’ effectiveness — under the right conditions. When customers understand the benefits of data collection for a personalized experience online, increased transparency is positive. But “the scales tip in the opposite direction when transparency reveals unacceptable practices,” Barasz summarizes.
Why Do You Think I Would Want to Lose Weight?
Have you ever seen ads online that make you wonder what the provider knows about you and how? Sometimes the answer to that question will make you feel better about the platform serving it up — and sometimes it will make you feel worse.
Via a series of controlled experiments around online ads, the co-authors of the paper, titled “Why Am I Seeing This Ad? The Effect of Ad Transparency on Ad Effectiveness,” find that striking the right balance between consumers’ privacy and ad targeting, and using their data judiciously, are the keys to keeping consumers happy and engaged — and more likely to buy the product advertised. When done right, ad targeting should increase the relevance of the product or service to the consumers targeted, which is why the practice can be beneficial for both sellers and buyers in the marketplace.
Yet, naturally enough, being transparent about ad-targeting practices scares consumers away when that transparency reveals something unsavory.
More specifically, the co-authors find that when third-party sharing occurs (i.e., when information is gathered outside the website on which a personalized ad appears), consumers tend to care more about their privacy and are less appreciative of being targeted. Consumers also react negatively when it is revealed that their information has been deduced or inferred from analytics. As the study mentions, there was a headline-grabbing example when the U.S. retailer Target correctly inferred that a teenage customer was pregnant, based on her purchase history of unscented lotion and other seemingly innocuous pharmacy supplies. The retailer then served up pregnancy-related coupons to her family’s computer before she had a chance to tell her parents about her condition. This is just the kind of inference that backfires for advertising. Online marketers, take note.
On the flip side, when consumers are told why they are seeing a certain ad on Facebook, for example, and their data is gathered in an acceptable manner (e.g., provided directly), these consumers express higher interest in purchasing a product and engaging with the advertiser. Even if it’s a weight-loss product. Trust is important, and transparency helps reinforce trust.
Methodology, Very Briefly
In two field experiments, the co-authors partnered with a company that runs redemption websites for loyalty programs to track actual behavior — with and without transparent statements about why customers were seeing a certain ad. In four other experiments, hundreds of participants, recruited via Amazon’s Mechanical Turk, answered questions about online advertising under various conditions, in order to better understand what causes consumers to object to targeting and how marketers can use personalization while respecting people’s privacy.