What you didn’t know about optimising viewability in RTB

By Zach Schapira | 22 April 2016

Viewability is a vexed issue in digital advertising. A recent study conducted by the team behind our own Peer39 machine learning engine shows that the majority of websites have shockingly consistent viewable rates.

This means that despite all the inputs one might use to optimise viewability in a pre-bid buying environment (position above or below the fold, creative size, ad format, and more), the best predictor of an ad's viewability is actually the historical performance of the website on which the ad is served.

This is groundbreaking, since much of the conventional wisdom about viewability points precisely to its complexity and unpredictability.

Some users will skim content and scroll past ads very quickly, while others will read every word and be exposed to each ad along the way. Some users will open multiple tabs at once and never get to reading every article, while others have the habit of reading everything they click on. And while some users access publisher sites from large screens, others will access content from small screens that obscure ads along the margin.

Of course, no one ever believed that viewability was only behavioural.

There were too many structural factors at play for that. For example, research by the Peer39 team found that small ads have a better chance of keeping their pixels in view than large ads (which is why, in the US, the Media Rating Council later revised its viewability definition to allow a 30% threshold for ads with more than 242,500 pixels, instead of the usual 50% threshold).

We also discovered that taller ads, such as a 300x600 banner, were more likely to pass the one-second threshold than shorter ads, such as a 300x250 banner, since it takes more time to scroll past the extra pixels.
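The rule described above can be summed up in a few lines. The following is a minimal sketch of the MRC display test as stated here: at least 50% of an ad's pixels in view for at least one continuous second, relaxed to a 30% pixel threshold for "large" ads of more than 242,500 pixels. The function name and measurement inputs are illustrative, not part of any real measurement SDK.

```python
# MRC pixel cutoff above which the relaxed 30% threshold applies
LARGE_AD_PIXELS = 242_500

def is_viewable(width: int, height: int,
                pct_in_view: float, seconds_in_view: float) -> bool:
    """Apply the MRC pixel/time test to one measured impression.

    pct_in_view: largest fraction of the ad's pixels continuously in view.
    seconds_in_view: how long that fraction stayed in view, in seconds.
    """
    threshold = 0.30 if width * height > LARGE_AD_PIXELS else 0.50
    return pct_in_view >= threshold and seconds_in_view >= 1.0

# A 300x250 (75,000 px) needs 50% in view; a 970x500 (485,000 px)
# qualifies as a large ad and only needs 30%.
print(is_viewable(300, 250, 0.45, 1.2))  # False: below the 50% threshold
print(is_viewable(970, 500, 0.35, 1.2))  # True: large-ad 30% threshold
```

Note how the same 35–45% in-view fraction passes or fails depending purely on creative size, which is the mathematical quirk the MRC revision addressed.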

But despite all efforts, no one has yet found or promoted "the most viewable ad size." Even if there were such a thing, it's not clear that everyone would shift to it. Even though smaller ads may be more viewable from a mathematical standpoint, larger ads are still more likely to drive engagement and brand lift, the outcomes for which viewability was only ever intended as a baseline.

However, once media buyers started holding publishers accountable for delivering against viewability, and especially after the IAB (US) recommended transacting on a 70%-viewable threshold, publishers had even stronger incentives to optimise their sites for viewability.

Our research demonstrates that whereas behavioural factors could not be controlled, structural factors could. As a result, many publishers rearranged their site layouts to give ads more favorable placement. They adopted formats more conducive to viewability.

They optimised sites for mobile, incorporating more-likely-to-be-viewed ads into their layout in a way that publishers who simply shrank their desktop view could never achieve. And of course, publishers who produced content that users were more likely to engage with were also rewarded with consistently higher viewability rates.

All of this raises the question: do these factors really make any difference? Does the consistency of these structural best practices outweigh the variability of user behaviour?

Peer39’s study, which covered nearly one billion impressions in a real-time bidding environment, shows that it does, in a big way. There are publishers who achieve consistently high viewability; there are publishers who always fall below benchmarks.

In fact, 65% of domains have average viewable rates that hardly vary from one day to the next (a standard deviation of less than 0.1).

And the majority of those domains actually have a standard deviation of less than 0.01 in their average daily viewable rates.

This means that a media buyer looking to optimise viewability need only target high-performing domains, and a buyer simply looking to eliminate the worst-quality placements need only avoid the lowest-performing ones.
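The domain-centric approach above can be sketched in a few lines. This hedged example computes each domain's mean daily viewable rate and its day-to-day standard deviation from aggregated logs, then keeps only domains that are both stable (standard deviation below the 0.1 cut used in the study) and above a chosen viewability bar. The log format, domain names, and the 0.6 target rate are illustrative assumptions, not figures from the Peer39 study.

```python
from statistics import mean, pstdev
from collections import defaultdict

# Hypothetical (domain, day, viewable_rate) rows, e.g. aggregated
# from bid logs; real pipelines would read these from a data store.
daily_rates = [
    ("news.example", 1, 0.72), ("news.example", 2, 0.70),
    ("news.example", 3, 0.71),
    ("blog.example", 1, 0.30), ("blog.example", 2, 0.65),
    ("blog.example", 3, 0.40),
]

by_domain = defaultdict(list)
for domain, _day, rate in daily_rates:
    by_domain[domain].append(rate)

def target_list(min_rate: float = 0.6, max_std: float = 0.1) -> list:
    """Domains with consistently high viewability, for pre-bid targeting."""
    return [d for d, rates in sorted(by_domain.items())
            if mean(rates) >= min_rate and pstdev(rates) <= max_std]

print(target_list())  # ['news.example']: stable and above the bar
```

The same machinery inverted (low mean, any deviation) yields the block list for a buyer who only wants to avoid the worst placements; the point of the study is that these lists stay valid from one day to the next.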

The domain-centric approach to viewability targeting is relatively new. As this model continues to help marketers control viewability rates, bringing it down to the granularity of subdomains or even specific URLs will improve accuracy further.

Big holding groups, including GroupM and IPG, are driving initiatives that promise 100%-viewable digital media bookings. However, a standard needs to be established before trading on viewability can take hold; in the meantime, marketers will see the real benefit from pre-bid data that targets high-viewability ad inventory.

Having said that, viewability is only one component. Marketers also need to increase relevance to maximise engagement and performance. Viewability pre-bid data is best used alongside brand safety and contextual targeting, giving marketers access to the safest, highest-quality, and most relevant page environments.

The bottom line is that not all impressions are equal. Advertisers want to know not only that they are paying for media where audiences are more likely to see their digital ads, but also that those ads are brand safe, fraud-free, and shown in the right contextual environment.

Viewability and verification must work hand in hand to drive superior campaign performance. The key is to use historical data to target future media buys, seek out quality domains and websites, and refresh your targeting data regularly, so that changes in publisher layout and content are taken into account.
