
There has been much attention and debate over Web tracking for the past decade. Many of the concerns focus on behavioral advertising, the widespread practice in which ad networks use cookies to keep track of the sites users visit in order to tailor the ads those users see across the Web. Advertising trade groups claim the practice is needed to provide ads that are relevant to users. But critics contend the ads are subliminal, unfairly manipulating users based on secret information. Others are concerned about the creation of profiles that could be used to discriminate against users who visit health-related or other sensitive sites. And some just think the ads are creepy.

Industry has responded by labeling the ads with a symbol intended to alert users to the tracking and show them how to opt out of the targeting. The Federal Trade Commission has upped the ante, pressing for a Do Not Track feature that would allow users to tell websites via their browser that tracking was verboten. The Obama administration has jumped in, applauding a compromise agreement by trade groups to halt the tailoring of ads when users activate the Do Not Track option.

The debate continues, with companies claiming the extra revenue from data-targeted ads is needed to support Web publishers and with advocates continuing to object to the practice as unfair.

The fireworks around behavioral ads have obscured many of the other, less provocative reasons that websites work with tracking companies. By setting a unique number, a cookie, on users' browsers, websites and advertisers can know how many unique users visited a website or saw an ad that was delivered across many sites. They can frequency-cap an ad to make sure that each user sees the giant pop-up ad that slows access to the site only once. By reading the same cookie when an ad is delivered on a publisher's site and again when that user later visits the advertiser's own site, the company can learn which ads are bringing users to its site.
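The cookie mechanics described above can be sketched in a few lines. This is a minimal illustration, not any ad server's actual code: in-memory dictionaries stand in for the browser's cookie jar and the server's impression log, and the cap of one impression is an assumed policy.

```python
import uuid

FREQUENCY_CAP = 1  # assumed policy: show the heavyweight ad at most once per user

def handle_request(cookie_jar, ad_impressions):
    """Assign a cookie ID on first visit, then decide whether to serve the ad.

    cookie_jar: dict simulating one browser's cookies
    ad_impressions: dict mapping cookie ID -> times this ad was shown
    Returns (cookie_id, show_ad).
    """
    if "uid" not in cookie_jar:
        cookie_jar["uid"] = str(uuid.uuid4())  # first visit: set a unique number
    uid = cookie_jar["uid"]
    shown = ad_impressions.get(uid, 0)
    show_ad = shown < FREQUENCY_CAP  # frequency capping
    if show_ad:
        ad_impressions[uid] = shown + 1
    return uid, show_ad

# Simulate one user visiting twice and a second user visiting once.
impressions = {}
alice_jar, bob_jar = {}, {}
_, first = handle_request(alice_jar, impressions)   # Alice, visit 1: ad shown
_, second = handle_request(alice_jar, impressions)  # Alice, visit 2: capped
_, third = handle_request(bob_jar, impressions)     # Bob, visit 1: ad shown

# Counting distinct cookie IDs gives the unique-visitor number.
unique_visitors = len(impressions)
print(first, second, third, unique_visitors)  # True False True 2
```

The same cookie ID, read again when the user reaches the advertiser's own site, is what ties an ad impression to a later visit.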

But what about users who see banner ads online and then end up purchasing at the store? Although online commerce is growing rapidly, most purchases are still made by users showing up in person at a store. Major offline retailers won't spend their dollars online without some understanding of whether the online ads they pay for are working. How can an advertiser who buys ads online learn that users who saw the ads are more likely to spend at their store than users who were not exposed to the ads?

Despite the challenge of the jump from virtual ad to physical store, savvy research analysts figured out long ago how to provide advertisers with reports that do just that. Here is how it works: An advertiser buys ads that are delivered by an ad server on the site of a Web publisher such as AOL, Yahoo or the Wall Street Journal that has a substantial number of registered users. Each time an ad is delivered, a flag is added to the profile the publisher keeps about the user who saw the ad. The user's name is then hashed, and the hash, with its flag, is sent to a service provider who will help join the "anonymized" data. That same service provider has been holding a similarly hashed copy of sales transactions from the retailer's customer database. The hashed users from the publisher are matched with the corresponding retailer data and a report is prepared. If the ads have been working, this summarized report tells the retailer that customers who made large purchases or many purchases were more likely to have seen the advertiser's ads than the general audience.
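The hashed matching described above can be sketched roughly as follows. The names, amounts, and use of SHA-256 are illustrative assumptions; real deployments layer on encryption and double-blind intermediaries, and only the aggregate summary reaches the advertiser.

```python
import hashlib

def hashed(name):
    # One-way hash so the matching service handles tokens, not raw names
    return hashlib.sha256(name.strip().lower().encode()).hexdigest()

# Publisher side: hashed IDs of registered users flagged as having seen the ad
exposed = {hashed(n) for n in ["Alice Smith", "Bob Jones"]}

# Retailer side: hashed customer IDs with in-store purchase totals
purchases = {
    hashed("Alice Smith"): 120.00,  # saw the ad
    hashed("Carol Lee"): 45.00,     # did not
    hashed("Bob Jones"): 80.00,     # saw the ad
}

# Matching service: join the two hashed datasets, then report only aggregates
exposed_spend = [amt for h, amt in purchases.items() if h in exposed]
other_spend = [amt for h, amt in purchases.items() if h not in exposed]

avg_exposed = sum(exposed_spend) / len(exposed_spend)
avg_other = sum(other_spend) / len(other_spend)
print(f"ad-exposed customers spent ${avg_exposed:.2f} on average "
      f"vs ${avg_other:.2f} for others")
```

Because both sides hash the same way, identical names produce identical tokens and the join succeeds without either party disclosing its raw customer list to the other.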

This practice of tracking users to prepare summarized analytic reports such as these is now fairly commonplace. Some companies go to great lengths to ensure anonymity, using encryption and third-party double-blind processes via intermediaries to add privacy protections to the procedure. Leading companies are proud of their successful ad campaigns and publicize case studies proclaiming the prowess of their ads.

Some critics point out that TV, radio and magazine advertisers measure ad effectiveness using research panels of users who sign up to provide feedback. Why do websites need to measure effectiveness with greater precision, given the complaints about tracking? But these other media can rely on the power of their message, carried by sound, pictures, and emotional stories. Users can recall the good ads, and the best become part of pop culture. "Good to the last drop" -- Maxwell House. "If I were an Oscar Mayer Wiener." "Mikey likes it!"

The tiny Web banner ad can hardly compete with TV, radio and magazines except in one way: it is precisely measurable.

And that's the rest of the story.

The authors are co-chairs of the Future of Privacy Forum, a think tank dedicated to advancing responsible data practices.
