The Quality Conundrum: Frequency Capping and MFA Websites


The dark side of wasted ad spend and “Made For Ads” (MFA) websites is well documented, especially since the release of Adalytics’ extremely in-depth report a couple of weeks ago.

The general response?

MFA websites are a major waste of advertising spend, but also a massive pain to circumvent, and advertisers are unlikely to find a magic key to avoid them entirely.

Our CTO, Erik Thorson, went a level deeper into the report and dove into why frequency capping is so difficult to manage in a programmatic environment.

What is your general perspective on the report?

Overall, I think this is a topic that is not discussed as often as it should be. Millions of dollars are being wasted on MFA sites across the industry, and have been since the advent of digital advertising. The data presented and the accompanying images make for a complete and compelling argument that will be very difficult to deny, and the comprehensive view is nice to see.

It’s not one group’s fault: Agencies, SSPs, DSPs and Brands themselves are all to blame, and they vary only in their inability or lack of effort to address the problem. That distribution of blame gives the paper much better footing, because it shows the author understands the industry, from its internal dynamics to its many moving parts. It also makes a very good case for DSPs and agencies to use site blacklists from Adalytics (or similar) to avoid serving on these sites at all.

How does frequency capping factor in?

The first time I brought up the issue of DSPs not being able to properly frequency cap was around 2013.

I was sitting with the CTO of Appnexus (now Xandr by Microsoft) at the time, explaining that we had a frequency cap requirement for a major financial advertiser that was spending well over a million dollars a month around tax season each year, and that our frequency capping through the DSP was not working. We had the cap set to something like 10, and it was serving over 500 imps to one cookie ID in seconds. He explained that they did in fact have frequency caps on the lines, that those caps were working properly, and that there are simply limitations to the architecture that don’t allow for guaranteed frequency capping.

I came to understand this infrastructure issue much better years later when we built our own bidder at Pontiac. In the end, the only way to truly control frequency capping on MFA sites is to use blacklists that make these sites ineligible for bids in the first place. The challenge with that strategy is that these sites are hard to identify (especially manually) and that new ones are constantly popping up.
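To make that concrete, here is a minimal sketch of what pre-bid blocklist filtering might look like; the domain list, request shape, and function names are hypothetical placeholders for illustration, not our actual bidder code.

```python
# Minimal sketch of pre-bid domain filtering (hypothetical names and data,
# not a real bidder). The idea: if the request's domain is on an MFA
# blocklist, it never becomes eligible for a bid, so frequency capping on
# that inventory never even comes into play.

MFA_BLOCKLIST = {
    "example-mfa-site.com",        # placeholder entries; a real list would
    "another-arbitrage-site.net",  # come from Adalytics or a similar vendor
}

def normalize_domain(domain: str) -> str:
    """Lowercase and strip a leading 'www.' so lookups are consistent."""
    domain = domain.lower().strip()
    return domain[4:] if domain.startswith("www.") else domain

def is_bid_eligible(bid_request: dict) -> bool:
    """Return False for any request whose site domain is on the blocklist."""
    domain = normalize_domain(bid_request.get("site", {}).get("domain", ""))
    return domain not in MFA_BLOCKLIST

# Usage: filter requests before they ever reach bidding or pacing logic.
requests = [
    {"site": {"domain": "www.example-mfa-site.com"}},
    {"site": {"domain": "reputable-news-site.com"}},
]
eligible = [r for r in requests if is_bid_eligible(r)]
print(len(eligible))  # -> 1
```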

Technically, the distributed, high-volume, scalable architecture that a DSP runs on is the reason it’s hard to deal with frequency capping. This architecture is ideal for handling hundreds of millions of requests per minute, but it’s not very transactional. All of our writes and reads are asynchronous across many different servers (bid request handlers), each with its own local cache, plus a centralized master cache; all of them eventually become consistent, but only after a short window of time. During that window, each server only knows about the requests it has received itself and cannot frequency or recency cap across the whole cluster for a couple of seconds.

Although writes are sent to the master with frequency information about an ID, the rest of the cluster cannot replicate that data instantaneously. Because of this, if a site sends thousands of requests at once to a very large DSP (thousands of request handlers), it’s very likely that many of those requests will be bid on without any node having instantaneous knowledge that other nodes in the cluster are also bidding. The banking industry deals with this using transactional databases in which all data is always consistent. That is not possible for the ad industry because of the large geographic reach of a campaign (and therefore slower network speeds) and the random nature of its delivery.
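To illustrate the failure mode, here is a simplified, hypothetical simulation: many bid handlers, each with only a local view of a cookie’s frequency, whose writes reach the shared master cache only after the burst is over. None of this is our actual bidder; it just shows why a cap of 10 can be blown past during a burst.

```python
# Simplified, hypothetical simulation of how an eventually consistent
# cluster overshoots a frequency cap during a burst. Each handler checks
# only its own local counter; writes reach the shared master cache only
# after the burst ends (modeling asynchronous replication lag).

FREQUENCY_CAP = 10
NUM_HANDLERS = 1000          # bid request handlers in the cluster
BURST_REQUESTS = 5000        # requests for one cookie ID within seconds

master_cache = {"cookie-123": 0}            # eventually consistent store
local_caches = [dict() for _ in range(NUM_HANDLERS)]
pending_writes = []                         # writes not yet replicated

def handle_request(handler_id: int, cookie_id: str) -> bool:
    """Bid if this handler's local count plus its (stale) view of the
    master is still under the cap. Other handlers' bids are invisible."""
    local = local_caches[handler_id]
    seen = local.get(cookie_id, 0) + master_cache.get(cookie_id, 0)
    if seen >= FREQUENCY_CAP:
        return False
    local[cookie_id] = local.get(cookie_id, 0) + 1
    pending_writes.append((cookie_id, 1))   # replicated later, not now
    return True

# Spread the burst round-robin across handlers, as a load balancer would.
bids = sum(
    handle_request(i % NUM_HANDLERS, "cookie-123")
    for i in range(BURST_REQUESTS)
)

# Replication finally catches up, after the damage is done.
for cookie_id, count in pending_writes:
    master_cache[cookie_id] += count

print(f"cap = {FREQUENCY_CAP}, impressions actually bid = {bids}")
# With 1,000 handlers each seeing only 5 requests, bids is 5000, not 10.
```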

Is AdTech able to support the tech needed to fully combat MFA?

Not really. AdTech just isn’t built to handle 100% accuracy in frequency capping, which is a feature more than a bug, and is likely the cause of the proliferation of MFA sites.

The CAP theorem is a well-known result on exactly this topic.

Pulling right from Wikipedia:

In theoretical computer science, the CAP theorem, also named Brewer’s theorem after computer scientist Eric Brewer, states that any distributed data store can provide only two of the following three guarantees:

Consistency: Every read receives the most recent write or an error.

Availability: Every request receives a (non-error) response, without the guarantee that it contains the most recent write.

Partition Tolerance: The system continues to operate despite an arbitrary number of dropped (or delayed) messages by the network between nodes.

Ad tech favors Partition Tolerance and Availability, which roughly translates into durable and high-throughput. A platform can guarantee a frequency cap on a cookie ID, but that requires a design that is impractical for concurrency or high volume, the keystones of a high-volume, performant DSP.
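One rough way to see that trade-off in code (illustrative only, with a hypothetical in-process “central store” standing in for a real transactional database): the CP-style check guarantees the cap but serializes every decision through one lock, exactly the concurrency a high-volume DSP cannot give up, while the AP-style check never blocks but can only consult a possibly stale local view.

```python
# Illustrative contrast between a CP-style and an AP-style frequency check.
# The "central store" here is a hypothetical in-process stand-in for a
# transactional database; a real system would also pay a network round trip.

import threading

FREQUENCY_CAP = 10

# --- CP-style: consistent, but every decision serializes on one lock -----
central_counts = {}
central_lock = threading.Lock()

def cp_should_bid(cookie_id: str) -> bool:
    """Atomically read-and-increment the global count. The cap is never
    exceeded, but throughput is bounded by the lock (and, in practice,
    by a cross-datacenter round trip)."""
    with central_lock:
        count = central_counts.get(cookie_id, 0)
        if count >= FREQUENCY_CAP:
            return False
        central_counts[cookie_id] = count + 1
        return True

# --- AP-style: always answers fast, from a possibly stale local view -----
local_counts = {}   # this node's view; replicated to peers asynchronously

def ap_should_bid(cookie_id: str) -> bool:
    """Check only the local counter. Fast and lock-free across the cluster,
    but other nodes' bids are invisible until replication catches up, so a
    burst can effectively see cap * number_of_nodes impressions."""
    count = local_counts.get(cookie_id, 0)
    if count >= FREQUENCY_CAP:
        return False
    local_counts[cookie_id] = count + 1
    return True
```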

Why is it so hard to squash MFAs?

Developers built MFA websites to exploit and arbitrage the frequency capping problems inherent in DSP technology.

Campaign reported:

The root issue in the industry is the misalignment of incentives between marketers, agencies, and publishers.

There is also a common misconception on the buy-side that programmatic means cheaper inventory or better performance, although what ‘better performing’ means remains unclear.

The misalignment of incentives is possibly the most interesting part of the problem. Agencies are first and foremost motivated to spend their advertisers’ budgets in full while delivering the expected performance. Advertisers don’t pay Agencies that miss performance goals.

The opportunity, and the problem, is that MFA sites usually have good performance metrics, ample inventory and low CPMs, so DSPs are able to hit their goals efficiently and automatically by targeting these sites. Additionally, MFA sites are not always obvious when you see them in reporting, and performance optimization models (both manual and automated) will favor them. To combat this, “better performing” must include the quality of the inventory itself, and quality should be as common a metric as CPM and CTR.
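One way to read that suggestion, as a sketch only: fold an inventory-quality score into the ranking so that a cheap, high-CTR MFA site no longer wins automatically. The numbers and the 0-to-1 “quality” score below are made up for illustration, not an industry-standard metric.

```python
# Hypothetical sketch of quality-adjusted ranking. The figures are made up,
# and the "quality" score (0..1, e.g. derived from an MFA blocklist or a
# verification vendor's rating) is an assumption for illustration.

sites = [
    {"domain": "mfa-arbitrage.example",  "cpm": 0.80, "ctr": 0.012, "quality": 0.1},
    {"domain": "reputable-news.example", "cpm": 4.50, "ctr": 0.009, "quality": 0.9},
]

def raw_efficiency(site: dict) -> float:
    """What a naive optimizer sees: clicks per dollar of CPM (CTR / CPM)."""
    return site["ctr"] / site["cpm"]

def quality_adjusted(site: dict) -> float:
    """The same efficiency, discounted by inventory quality."""
    return raw_efficiency(site) * site["quality"]

for site in sites:
    print(site["domain"],
          round(raw_efficiency(site), 4),
          round(quality_adjusted(site), 4))
# The MFA site wins on raw efficiency (0.015 vs 0.002) but loses once
# quality is factored in (0.0015 vs 0.0018).
```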

The good news? CTV growth enables better use of budgets. In CTV, maintaining quality is a bit easier because of transparency (channel, network, series), publisher brand recognition and the general difficulty of getting an MFA CTV app onto a set-top box such as Apple TV or Roku. It’s not without its pitfalls, and there are plenty of bad actors, but it’s a smaller universe and one that is easier to police.

So where does that leave us?

Ultimately, I think the amount of money wasted overall by the top brands is the most shocking part of the article. In the end, the brands themselves are the most likely to drive real, meaningful change: they have the money and they control the execution requirements. On the DSP side, we offer all of the tools one would need to combat this.

Marketers and buying platforms are trying to balance the need for performance and efficiency against the need to eliminate bad actors. The problem is that fully eliminating MFA may result in worse performance and higher costs; a hard pill to swallow for an industry hooked on efficient performance.
