Interesting news from Google, which has announced a move to “data-driven attribution”. OK, that’s a complex term for the first sentence of a blog post, so let’s unpack what’s going on…


What is Attribution?

Hopefully, you understand attribution: in this context, it’s giving credit to the different marketing activities that resulted in achieving an objective (e.g. online purchase, conversion, etc.). Often the algorithms used are really simple – for example:

  • Last click attribution: the thing that was clicked last (e.g. display advert, search ad, email) before the objective was achieved gets all the credit.
  • First click attribution: the first thing that was clicked (or at least, the first click you actually tracked) gets all the credit.

These approaches have obvious potential problems. If your email and advertising systems are separate, both will claim all the credit for the same conversion. Of course, we also know that few activities are the “magic bullet” that causes a particular prospect to convert, and that marketing tactics work together over time. So more complex attribution approaches, such as linear and time decay, were added to better reflect multiple ads (or other activities) working together.
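The rule-based models above can be sketched in a few lines of Python (a simplified illustration, not any ad platform’s actual implementation); a “path” here is the ordered list of touchpoints recorded before a conversion:

```python
def attribute(path, model="last_click"):
    """Split 1.0 unit of conversion credit across an ordered list of
    touchpoints, e.g. ["display", "search", "email"]."""
    n = len(path)
    if model == "last_click":
        credit = [0.0] * n
        credit[-1] = 1.0           # everything to the final touch
    elif model == "first_click":
        credit = [0.0] * n
        credit[0] = 1.0            # everything to the first touch
    elif model == "linear":
        credit = [1.0 / n] * n     # equal share to every touch
    elif model == "time_decay":
        # later touches weighted more heavily (weights 1, 2, ..., n)
        weights = list(range(1, n + 1))
        total = sum(weights)
        credit = [w / total for w in weights]
    else:
        raise ValueError(f"unknown model: {model}")
    return list(zip(path, credit))
```

For example, `attribute(["display", "search", "email"])` gives all the credit to the email, while the `linear` model splits it evenly three ways.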


What is Data-Driven Attribution?

This is a tough question. We don’t exactly know. Google describes the operation of data-driven attribution as:

“Data-driven attribution gives credit for conversions based on how people engage with your various ads and decide to become your customers.”

This means there is some algorithm tracking users and determining how much credit each activity should take for converting the prospect into a customer. Unfortunately, and unsurprisingly, Google isn’t telling anyone how it works. It’s also likely they will change the algorithm frequently.

We do understand the basics of data-driven attribution: put very simply, Google looks at the different elements of your marketing campaigns and determines how seeing different adverts affects the probability of reaching the goal. In principle this should be a good thing.
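Google hasn’t published its algorithm, but one well-known family of data-driven techniques works by comparing conversion rates for journeys that did and didn’t include a given advert (sometimes called a “removal effect”). A very rough sketch, using hypothetical journey data:

```python
def removal_effect(journeys, ad):
    """Estimate how much an ad lifts conversion probability by comparing
    journeys that include it with journeys that don't.
    Each journey is a (touchpoints, converted) pair, converted being 0 or 1."""
    with_ad = [conv for touches, conv in journeys if ad in touches]
    without_ad = [conv for touches, conv in journeys if ad not in touches]
    if not with_ad or not without_ad:
        return None  # not enough data to make the comparison
    rate_with = sum(with_ad) / len(with_ad)
    rate_without = sum(without_ad) / len(without_ad)
    return rate_with - rate_without  # positive = ad appears to help
```

Real systems are far more sophisticated than this, but the sketch shows why such models are data-hungry: every estimate is a difference between two observed rates, and small samples make both rates noisy.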


What Will Happen With Data-Driven Attribution?

We do know this! You’ll see credit for conversions (and conversion revenue if you track that) applied across all of your advertising. This could mean that Google decides to award some credit for conversions to ads that actually get no clicks. It’s a little counter-intuitive but does make sense: seeing an advert could make someone more likely to convert on a future advert, even if they don’t click.

One thing we won’t see is Google giving any credit to any marketing activity outside of their advertising domain. So don’t expect to see your adverts on trade publication websites, email marketing or PR credited. You’ll only see attribution assigned to things Google sells to you.


Why is Data-Driven Attribution a Concern?

In theory this should be a good thing: we get a much better model of what causes a person to convert, and therefore we should be able to better optimise our ads. But the reality is that in B2B PPC advertising you’re often trying to reach a very small and specific audience, so it’s easy to get things wrong because probability gets in the way. You need a significant amount of data to make a model like this work, and if you are targeting CEOs of Fortune 1000 electronics companies (and doing it well), you might not get sufficient data. The other problem is that in small samples external factors can skew the results: if the small number of Fortune 1000 electronics CEOs who buy your product happen to be heavy YouTube users, data-driven attribution might give credit to a YouTube ad even if the ad is ineffective.

Just to be clear, as the numbers targeted get larger, these data issues become less challenging. For many B2B technology companies, however, it may not be possible to gather enough data to really understand whether an advert deserves credit for a conversion. We see this all the time with clients who have one ad in an A/B test out-performing another at the start of a campaign; as the numbers build up, the initial indication that the ad was better turns out to be wrong, and we realise it was just randomness skewing the early results.
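You can see this effect in a quick simulation: two “ads” with exactly the same true click-through rate will often look quite different after a couple of hundred impressions, but converge once the sample is large. (The rate, sample sizes and seed below are illustrative choices, not real campaign figures.)

```python
import random

def ab_simulation(true_rate=0.02, n_early=200, n_full=20000, seed=7):
    """Simulate two ads with the SAME underlying click-through rate,
    observed after a small and then a large number of impressions."""
    rng = random.Random(seed)

    def observed_rate(n):
        # count simulated clicks out of n impressions
        return sum(rng.random() < true_rate for _ in range(n)) / n

    early = (observed_rate(n_early), observed_rate(n_early))
    full = (observed_rate(n_full), observed_rate(n_full))
    return early, full
```

At 200 impressions the two observed rates can easily differ substantially even though the ads are identical; at 20,000 impressions both settle close to the true 2% rate.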

Oh and we’re going to have to just trust Google on its attribution. But they would never implement something to make their advertising products look like they were performing better than they were… would they?


What’s the Solution?

As an engineer, my go-to solution has to be maths. In this case, a good understanding of probability and statistics is going to be your best friend. No AI can overcome the laws of mathematics, and with a small sample it can be impossible to know whether a result is real or caused by randomness. So it’s important to understand that there are limits to how well the model can work with smaller data sets.

If you know how probability works, and understand sample sizes, however, you’ll be able to navigate the new approach and hopefully benefit from the AI Google has deployed. If not, I’d recommend playing with something like our AB test calculator to find out how probability means that sometimes the ad with the worse click-through rate isn’t necessarily bad; pure chance has made it seem as if it is.


More Information

Check out the following for more information: