Internet.com (Jan. 9, 2001)
You have just spent $100,000 on an Internet advertising campaign. Your boss expects results. And you have them — two nice detailed reports. The only problem is, they say very different things.
Welcome to the world of Web advertising measurement.
“If you are an advertiser and you are serving to a host of sites and you get back numbers different from what was sent out — sometimes by 20 to 40 percent — you don’t have any idea what has just happened to your money,” says Doug Knopper, vice president of sales and research at the Internet ad agency Doubleclick.
This Alice-through-the-looking-glass scenario is the result of the industry's lack of agreement on just when an ad has been “seen.”
“Did the visitor see it when the ad was loaded, when the page was loaded, or when the browser made the call for the ad?” asks Knopper, whose company serves 65 billion ads a year. “Each criterion has a technical reason behind it. If you then multiply that by the number of different companies serving ads, it’s exponential in how many differences there can be in counting methodologies.”
The problem stems from the fact that the site on which the ad appears and the company serving the ad (either the advertising agency or a third-party firm) each generate their own reports tracking impressions. Unless both are using the same tracking system, such as Doubleclick’s DART technology, there are inevitably differences, sometimes huge ones.
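The divergence Knopper describes can be sketched in a few lines. This is an illustrative toy, not any vendor's actual method: the event tuples and drop-off pattern are invented here, but they show how the same traffic produces three defensible "impression" totals depending on whether you count at the browser's ad request, at page load, or at ad load.

```python
# Hypothetical visit log: (ad_requested, page_loaded, ad_loaded) per visit.
# Some visitors leave before the page finishes; more leave before the ad renders.
visits = [
    (True, True, True),    # full view: every methodology counts this one
    (True, True, False),   # page loaded, but the user left before the ad rendered
    (True, False, False),  # user hit "stop" almost immediately
] * 1000

count_at_request = sum(1 for req, page, ad in visits if req)
count_at_page_load = sum(1 for req, page, ad in visits if page)
count_at_ad_load = sum(1 for req, page, ad in visits if ad)

# Identical traffic, three different "impression" counts:
print(count_at_request, count_at_page_load, count_at_ad_load)  # 3000 2000 1000
```

Here the gap between the loosest and strictest count is a factor of three; the 20 to 40 percent discrepancies Knopper cites come from subtler versions of the same mismatch.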
An Industry Screwed
“You can’t do business if a pound is not a pound on everybody’s scale; it’s pretty fundamental,” says Jim Spaeth, president of the Advertising Research Foundation, who has helped head up the industry’s effort to bring together competing measurement firms to find a common standard. “Despite their individual interests, the greatest common interest is a common measurement system and they’re all screwed until that happens.”
But given that the various industry players are heavily invested in their own technologies, the challenge of finding common ground is huge.
“Every company seems to have very unique methodologies for tracking what’s happening with online advertising,” says Charlie Buckwalter, an analyst at Jupiter’s AdRelevance. “This is a very complex problem and there’s no one way to do it right.”
So complex that it took the industry’s so-called FAST committee — an alliance of advertising buyers and sellers — a year and a half just to agree that the basic measurement would be, as ARF’s Spaeth puts it, “how many eyeballs on the media.”
Counting the Ways to Count
Ad measurement technologies differ in a plethora of subtle ways, but even the major differences can seem overwhelming.
At its most basic is the divide between server-based technologies, used by networks like Doubleclick, Engage and 24/7 Media, and those that employ human panels, such as Nielsen/NetRatings and Jupiter’s Media Metrix. Differences include the ability to pick up an ad when it is served out of a proxy server, deeper levels of granularity on ads served beyond a site’s main pages, detailed demographics, and on goes the list.
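The proxy-server gap is worth a quick sketch. The numbers below are assumptions made up for illustration (10,000 visits, one request in four answered from a proxy cache), but the mechanism is the one described above: a server-side log only counts requests that actually reach the ad server, while a panelist's browser records every render, cached or not.

```python
server_log = 0       # impressions the ad server's own log can see
panel_observed = 0   # renders observed on panelists' machines

for i in range(10_000):
    cached = (i % 4 == 0)   # suppose one request in four is served from a proxy cache
    panel_observed += 1     # the browser rendered the ad either way
    if not cached:
        server_log += 1     # only cache misses ever reach the ad server

print(panel_observed, server_log)  # 10000 7500
```

Under these made-up assumptions the server-based count comes in 25 percent low, which is squarely in the 20-to-40-percent discrepancy range Knopper describes.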
“They each have their strengths and weaknesses, but there can be tremendous differences in counts,” says Spaeth. “That’s disconcerting.”
Doubleclick, for example, has a team of 35 people who spend their lives trying to sort through those counting discrepancies.
“Sometimes, it takes days or weeks to figure out because it’s so complex,” says Knopper, who reports that at a recent meeting, the agency’s top clients said eliminating those conflicting numbers was among their top priorities for the coming year.
It’s also a prime goal of the Digital Marketing and Commerce Coalition, a newly formed group that brings together the Association of National Advertisers, the Internet Advertising Bureau, the AAAA, ARF and the Direct Marketing Association. The coalition grew out of the FAST committee, which was formed when advertising’s 800-pound gorilla, Procter & Gamble, told the Internet ad industry it had better develop some standards, or else.
“It was kind of like being called into the principal’s office and told, ‘You’ve got to make this work or we’re pulling out,’” Spaeth recalls.
Along with the basic rule-of-thumb for ad measurement, the committee developed standards for ad units and online privacy.
The new coalition’s task remains largely the same: Provide a comfort level for all those brand managers considering allocating part of their budgets to the Web.
“If you’re going to move your money from a trusted medium to a new medium, it’s a career risk and brand managers are asking for proof this is a smart move,” says Spaeth, who serves on the coalition.
But at least one analyst believes that the drive for a solution to the counting conundrum on Internet time is unrealistic.
“It is a very big statement on the information industry that there’s hardly any patience these days,” says Buckwalter of AdRelevance, who argues that the rapid development of technology in a fiercely entrepreneurial environment makes it unrealistic to expect industry standards to be developed overnight.
“Eighteen months ago clickthroughs were routinely regarded as the right way to see how effective an ad was,” he points out. “Then everybody said the banner was dead. Now clickthroughs are dead but the banner is very much alive in a dynamic new market.
“This industry is growing and maturing and as it does, there’s going to be an acceptance and understanding that certain approaches work better than others.”