April 2020
The heralded end of cookies has done nothing to simplify the process of measuring attribution. On the contrary, it is looking even harder to piece the customer journey together. Now is the ideal opportunity to review the methods and raise questions about the purpose of attribution.
Puzzle pieces scattered across the marketing table. A 10-year quest for the Holy Grail with nothing to show. These are the images that often crop up when thinking about attribution. It is one of those subjects where there seems to be a gulf between theory and practice.
In theory, attribution promises to weigh the different touchpoints in the customer journey in a bid to credit partners according to their true value and to enhance the marketing mix. This challenge obviously plays a key role in today’s omnichannel environment. The field has seen its fair share of vendors trying to outdo one another, with technological wonders driven by artificial intelligence, and magical algorithms and models that are little more than pretty black boxes.
Attribution vs. data black holes
In practice, the picture is somewhat different. The first reason is that the vast majority of brands have adopted a last-click attribution model. The second reason is that the snapshot of touchpoints is sketchy to say the least. The situation will hardly improve with the heralded end of third-party cookies and regulations that keep tightening their requirements. Safari may currently represent something of a black hole in terms of attribution after rooting out third-party cookies, but it will not be an isolated case for long. Black holes are going to spread throughout the customer journey.
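To make the dominant model concrete: last-click attribution simply hands all the conversion credit to the final touchpoint before purchase, ignoring everything upstream. A minimal sketch, using entirely hypothetical journey data:

```python
# Minimal sketch of last-click attribution: 100% of the conversion credit
# goes to the final touchpoint before purchase. All data is hypothetical.

def last_click_credit(journey):
    """Assign all conversion credit to the last touchpoint of a journey.

    journey: list of (channel, timestamp) pairs, ordered in time.
    Returns a dict mapping each channel to its share of the credit.
    """
    credit = {channel: 0.0 for channel, _ in journey}
    last_channel = journey[-1][0]
    credit[last_channel] = 1.0
    return credit

# A hypothetical customer journey ending on paid search.
journey = [
    ("display", "2020-04-01"),
    ("email", "2020-04-03"),
    ("search", "2020-04-05"),
]
print(last_click_credit(journey))  # search gets all the credit
```

The simplicity is the whole appeal: no identity resolution is needed beyond the converting session, which is precisely why the model survives even as upstream touchpoints vanish into black holes.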
The prospect of a web devoid of all third-party cookies is prompting everyone involved in attribution to question which strategies to adopt. It may come as no surprise that the probabilistic method, which relies on analysing large data volumes, is losing its appeal, since those volumes are shrinking in the current technological and legal context. Interest is surging instead in the deterministic method, which aims to reconcile touchpoints carrying reliable data. In practice, this method may involve small data volumes – reconciling 6% of the data is already considered a satisfactory result – but it has the merit of being part of a global effort to create value from first-party data.
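The deterministic approach described above can be sketched very simply: touchpoints are joined only when they share a reliable identifier (a login, a loyalty card number), and everything else stays unmatched. The identifiers and touchpoints below are hypothetical, but the mechanics – and the modest reconciliation rate – are the point:

```python
# Sketch of deterministic reconciliation: touchpoints are grouped only when
# they carry a reliable identifier (e.g. a hashed login email). Anonymous
# touchpoints cannot be reconciled. All data below is hypothetical.

def reconcile(touchpoints):
    """Group touchpoints by known user id.

    Returns (groups, rate): a dict of journeys keyed by user id, and the
    share of touchpoints that could be reconciled.
    """
    groups = {}
    matched = 0
    for tp in touchpoints:
        uid = tp.get("user_id")  # present only when the user identified themselves
        if uid is not None:
            groups.setdefault(uid, []).append(tp)
            matched += 1
    rate = matched / len(touchpoints)
    return groups, rate

touchpoints = [
    {"channel": "search", "user_id": "u1"},
    {"channel": "display", "user_id": None},   # anonymous: cannot be reconciled
    {"channel": "email", "user_id": "u1"},
    {"channel": "social", "user_id": None},
]
groups, rate = reconcile(touchpoints)
print(rate)  # 0.5 in this toy example; ~6% is already satisfactory in practice
```

The trade-off is explicit in the code: the reconciled journeys are trustworthy, but the anonymous rows simply drop out, which is why the reconciliation rate stays low.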
Priority for first-party data
This is the good news in this particular situation: there are many drivers for amassing first-party data and improving its reliability, from logins (on apps and websites) to store loyalty cards. Until there is greater clarity about the solution that will take over from cookies, focusing on the touchpoints that can be used to identify prospects and customers appears to be the most pragmatic way of bringing a more tangible edge to the attribution model. Brands have understood the issue and are ramping up their initiatives, including systematic data collection at the checkout and a growing number of emails sent out during the ordering process to map each customer across all their devices.
This strategy is often based on owned data and makes it possible to combine the last-click model with a customised model applied to the touchpoints that have been mapped. The alternatives are a pure last-click model or a single multi-touch model applied across the board. It has widely been acknowledged that there is no such thing as the ideal attribution model. There are only models geared towards what is known of the customer journey and the reliability of the associated data.
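One way to picture such a combined model: when a journey has been fully reconciled, apply a customised multi-touch split; otherwise, fall back to last-click. The position-based weights below (40/20/40) are a hypothetical choice for illustration, not a recommendation:

```python
# Sketch of a hybrid model: a position-based multi-touch split for fully
# mapped journeys, with a last-click fallback otherwise. The 40/20/40
# weights and the journey data are hypothetical.

def attribute(channels, fully_mapped):
    """Split conversion credit across an ordered list of channels."""
    credit = {c: 0.0 for c in channels}
    if not fully_mapped or len(channels) == 1:
        credit[channels[-1]] = 1.0  # last-click fallback
        return credit
    # Position-based: 40% to the first touch, 40% to the last,
    # 20% shared equally among the touches in between.
    credit[channels[0]] += 0.4
    credit[channels[-1]] += 0.4
    middle = channels[1:-1]
    if middle:
        for c in middle:
            credit[c] += 0.2 / len(middle)
    else:
        credit[channels[0]] += 0.1
        credit[channels[-1]] += 0.1
    return credit

# Same journey, two very different readings depending on data reliability.
print(attribute(["display", "email", "search"], fully_mapped=True))
print(attribute(["display", "email", "search"], fully_mapped=False))
```

The point of the sketch is the branch, not the weights: the model a brand can actually run is dictated by how much of the journey its first-party data lets it see.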
Is attribution an end in itself?
Another advantage with the deterministic approach and the resulting actions is that it helps drive home the point that attribution is not an end in itself, but a means. The purpose is still to give greater context to the growing number of activations. In other words, sending the right message at the right time to the right person via… the appropriate touchpoint.
Measuring attribution was already a highly constrained exercise before third-party cookies were singled out, which may be seen as a (major) constraint as well… or an incentive to approach the subject from a new angle. For example, less effort could be invested in measuring correlations in favour of analysing the direct impacts according to specific geographic and time variables.
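Analysing direct impact by geography, for instance, can be as simple as comparing conversion rates in regions where a campaign ran against comparable holdout regions over the same period. The figures below are hypothetical, and real geo tests need matched regions and significance checks, but the arithmetic is this:

```python
# Sketch of a geographic direct-impact reading: relative lift of regions
# exposed to a campaign over matched holdout regions, same time window.
# All figures are hypothetical.

def lift(exposed_conv, exposed_visits, holdout_conv, holdout_visits):
    """Relative lift of the exposed regions' conversion rate over holdout."""
    exposed_rate = exposed_conv / exposed_visits
    holdout_rate = holdout_conv / holdout_visits
    return (exposed_rate - holdout_rate) / holdout_rate

# Hypothetical figures: 300 conversions on 10,000 visits where the campaign
# ran, versus 200 on 10,000 visits in comparable regions without it.
print(f"{lift(300, 10_000, 200, 10_000):.0%}")  # 50% relative lift
```

No cookie, probabilistic or otherwise, is needed for this kind of reading – only aggregate figures split by geography and time, which is what makes it attractive in the post-cookie context.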
When it comes to attribution, the post-cookie age could well be synonymous with a back-to-basics approach driven primarily by hyper-contextualisation and first-party data.