Just a few decades ago, media planning and buying was all “traditional”: print, radio, TV and out-of-home (OOH). Through these channels, broad messages were broadcast to large, general audiences because targeting more precise audiences was difficult, if not impossible. We didn’t have Facebook and Instagram; we didn’t have YouTube and online video; we didn’t even have banner ads on every website. Come to think of it, we didn’t even have websites.
Fast forward to the present, and digital marketing channels are the hottest ticket in town. The digital media planning and buying process comes with much better strategic targeting and tracking capabilities, but these capabilities are only as useful as the data that powers them and the effort invested in analysis. Data provides deep insights at all stages of the process. It’s used to create media plans and profile high-value segments of your audience, to help you decide which impressions to purchase on programmatic ad exchanges, to help determine which users should see which creative and to learn which consumers are responding to advertising.
With data informing so many decisions, it’s no surprise that there is a lot of money at stake for advertisers. Good data is not cheap to procure: advertisers spent $19.2 billion on third-party audience data and related solutions in 2018, according to the IAB. If a dataset turns out to be unusable for your brand or use case, then you’re out that money and still no closer to achieving your goals.
Data is central to the media planning and buying process these days, but if you aren’t receiving quality datasets, then you won’t see the results you’re expecting. Fortunately, if you know what you’re looking for, you can be more confident in the data you use every step of the way.
A lot of the data used in the digital media planning and buying process comes from a third party, meaning it’s purchased from a company that has no direct relationship with the consumers whose data was collected. Third-party data falls into two broad categories: audience data, and inventory and verification metadata.
Audience data gives you information on consumers. These are lists or categories of users collected based on specific demographics or interests they’ve exhibited. These datasets can be used to target consumers based on their income bracket, their interest in purchasing a certain product or service, their age and even their offline identity. Inventory and verification metadata is data related to ad inventory, such as its viewability or legitimacy. It also gives you information on the context of the ad, which is great if you want to ensure that your ad won’t be running alongside anything you don’t want your brand associated with.
Not all third-party data is the same. Many different sources and methodologies are used to construct datasets, which is what makes it so hard to know if you’re really getting what you pay for. In the past, a lot of data was gathered using third-party cookies, but that’s all changing. Safari, Firefox and Internet Explorer already block third-party cookies, and Google Chrome will be following suit. This only adds more uncertainty around data collection methodologies and accuracy.
This year, the Association of National Advertisers’ Trust Consortium released Data Sources for Media: A Buyer’s Guide to help advertisers better assess data vendors. In it, they provide guidance on evaluating data based on its accuracy, precision, freshness, coverage and usability.
Usually, datasets are created through inferences. The creator infers that a specific user behavior means that these users have a certain mindset or intention. For example, users who visit a real estate website and look at properties are planning to move and will be interested in moving services. These kinds of inferences are important even if they aren’t always right. Without them, it would be impossible to create audience lists without surveying the entire population of internet users.
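As a sketch of what such an inference looks like in practice, here is a minimal example that tags users with "moving intent" based on real-estate browsing behavior. The event fields, threshold and segment name are invented for illustration; real vendors use far more signals and more sophisticated rules.

```python
# Hypothetical behavior-based segment builder: infers "moving intent"
# from real-estate browsing events. Field names and the threshold of
# three listing views are invented for this illustration.

def build_moving_intent_segment(events, min_listing_views=3):
    """Return the set of user IDs inferred to be planning a move."""
    views_per_user = {}
    for event in events:
        if event["site_category"] == "real_estate" and event["action"] == "view_listing":
            views_per_user[event["user_id"]] = views_per_user.get(event["user_id"], 0) + 1
    # The inference: repeated listing views imply moving intent.
    return {uid for uid, views in views_per_user.items() if views >= min_listing_views}

events = [
    {"user_id": "u1", "site_category": "real_estate", "action": "view_listing"},
    {"user_id": "u1", "site_category": "real_estate", "action": "view_listing"},
    {"user_id": "u1", "site_category": "real_estate", "action": "view_listing"},
    {"user_id": "u2", "site_category": "news", "action": "view_article"},
]
print(build_moving_intent_segment(events))  # {'u1'}
```

Note that the inference can still be wrong: u1 might be a real estate agent, not a mover. That gap between behavior and intent is exactly what accuracy measures.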
Accuracy, in the case of datasets, means: are the inferences made to construct a given list correct? How did a data supplier come up with this data? Did they look at public records, the users who made an account on a specific website, consumers who have recently made similar purchases or another source? Did they look at the information in a novel way to create their dataset? Does the reasoning and logic employed to construct it make sense? If the inferences are accurate, then a dataset is more likely to perform as expected.
Precision is a question of modeling for a dataset. The inferences used can be correct, but if the data collection methodology that was used is flawed, then the audience list can be less precise. Datasets can be modeled as either deterministic or probabilistic. Deterministic methods require that every person on a list is observed performing the targeted behavior. Probabilistic methods will include some users who were directly observed as well as others who have similar behavioral or demographic profiles to the users actually observed.
Obviously, deterministic methods are seen as more precise, but there is a tradeoff: greater precision comes at higher cost and reduced scale, whereas probabilistically built lists cost less and offer more scale. Neither methodology is inherently bad. If you’re running a campaign aimed at the top of the funnel, a larger, less precise audience list isn’t a problem. The real red flag is a data supplier that won’t reveal its data sources or modeling in the first place. If you have no idea how precise the data is, you can’t apply it effectively.
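The contrast between the two methodologies can be sketched in a few lines. The profiles and the single-trait "lookalike" rule below are made up for illustration; real probabilistic modeling is far more elaborate, but the scale-versus-precision tradeoff is the same.

```python
# Illustrative contrast between deterministic and probabilistic list
# construction. User IDs, profiles and the similarity rule are invented.

observed = {"u1", "u2"}  # users directly seen performing the target behavior

profiles = {
    "u1": {"age_band": "25-34", "interest": "autos"},
    "u2": {"age_band": "35-44", "interest": "autos"},
    "u3": {"age_band": "25-34", "interest": "autos"},   # similar, never observed
    "u4": {"age_band": "55-64", "interest": "travel"},  # dissimilar
}

def deterministic_list(observed):
    # Deterministic: only users actually observed make the list.
    return set(observed)

def probabilistic_list(observed, profiles):
    # Probabilistic: observed users plus lookalikes who share a trait
    # with any observed user (a crude stand-in for real modeling).
    observed_traits = {profiles[u]["interest"] for u in observed}
    return {u for u, p in profiles.items()
            if u in observed or p["interest"] in observed_traits}

print(len(deterministic_list(observed)))            # 2: precise but small
print(len(probabilistic_list(observed, profiles)))  # 3: broader, less precise
```

The probabilistic list picks up u3, who was never observed but looks like the observed buyers; that is exactly where the extra scale, and the extra imprecision, comes from.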
Freshness, sometimes called recency, is a simple concept: how new is the information in a dataset? It doesn’t matter how accurate and precise a list is if it’s six months old. The frequency at which a dataset is refreshed is especially important for interest-based audience lists. If someone wanted to buy a new car three months ago, they could’ve already purchased one or decided against it by now.
Before buying a dataset, make sure you understand how often it’s updated and what share of the audience is still valid at any given time. If the buyer’s journey for a new car takes two weeks, but the list is only refreshed every four weeks, then roughly 50% of the data is out of date at any given moment.
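That back-of-the-envelope estimate can be written out directly. This sketch assumes record ages are spread evenly across the refresh cycle and that a record is "out of date" once the consumer has had time to finish the buyer's journey; the function name and numbers are illustrative.

```python
# Rough staleness estimate: assumes record ages are spread evenly
# across the refresh cycle, and a record is "out of date" once the
# consumer has had time to complete the buyer's journey.

def stale_fraction(journey_weeks, refresh_weeks):
    """Estimated share of a list that has aged past the buyer's journey."""
    if refresh_weeks <= journey_weeks:
        return 0.0  # every record is newer than the journey length
    return (refresh_weeks - journey_weeks) / refresh_weeks

# A two-week buyer's journey against a list refreshed every four weeks:
print(stale_fraction(2, 4))  # 0.5 -- half the list has likely moved on
```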
By coverage, we really mean “scale.” Digital advertising gives you a great ability to scale your target audience and reach people you couldn’t through traditional media. However, don’t be so blinded by the size of a dataset that you ignore its actual applicability.
Bigger does not always mean better, after all. A list may only be large because a vendor made overly broad inferences or got a little too loose with their modeling methods. There could be a lot of data on a list that isn’t actually relevant. And, you can’t ignore the amount of time and effort it will take to manage large numbers of media buys against small audiences. Of course, a dataset can also be too narrow. If a vendor made incorrect inferences regarding a potential audience or didn’t stick to a firm modeling approach, it’s likely they’ve missed key segments of consumers you could be targeting.
When you’re considering a dataset, think about how much scale is really necessary for your use case. Does the audience list seem too small or too big? It’s not unheard of to see audience lists that are bigger than the actual target population, so if you’re looking to run a campaign that targets people closer to the bottom of the sales funnel, pay closer attention to the coverage of a dataset.
No matter how accurate, precise and applicable a dataset is, your purchase will still be a waste if you can’t activate it. Sharing data between different sources and media platforms can come with significant data loss if the two parties don’t have corresponding profiles. For instance, having an audience list based on age demographics won’t be very helpful if your media partner doesn’t track that in their database. Moreover, if you’re limited in your ability to consistently identify users on whatever demand side platform (DSP) you’re using, it’ll be difficult to match profiles.
While some data loss is expected during onboarding, your data suppliers, DSPs and data management platforms (DMPs) should set reasonable expectations up front to avoid dismal match rates.
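A match rate is just the share of a purchased audience that a platform can actually recognize and address. As a minimal sketch (the IDs here are invented, and real onboarding matches on hashed identifiers rather than raw lists):

```python
# Minimal match-rate check between a purchased audience list and the
# IDs a DSP can recognize. IDs are invented for illustration; real
# onboarding typically matches on hashed emails or device IDs.

def match_rate(audience_ids, platform_ids):
    """Share of the purchased audience the platform can actually address."""
    audience = set(audience_ids)
    if not audience:
        return 0.0
    return len(audience & set(platform_ids)) / len(audience)

audience = ["a1", "a2", "a3", "a4"]   # IDs on the purchased list
dsp_known = ["a2", "a4", "zz"]        # IDs the DSP can resolve
print(match_rate(audience, dsp_known))  # 0.5 -- half the list is activatable
```

If only half of a list survives onboarding, you effectively paid double per usable record, which is why match-rate expectations belong in the conversation before purchase.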
The Data Stewards
With 35 years of experience in media, The Ward Group has seen how data has changed the media planning and buying process, and we don’t blame you for thinking this can all be pretty overwhelming. We’ve been at this long enough to know when data is misleading us or genuinely providing good insights into the target audience. If you’re ready to take the mystery out of digital media planning and buying, our media stewards are just a call away.