Last week, the media world was treated to an incredibly rare event: a journalist met a poll he didn't like.
It's not just one poll, in fact, but dozens. If Markos Moulitsas, founder of the Daily Kos website, is correct, he has been unwittingly publishing bogus surveys for months. Moulitsas hired a polling firm, Research 2000, to keep his readers abreast of popular opinion -- and to supply his website with delicious tidbits that paint the GOP as extremist (Is Barack Obama a Socialist? Sixty-three percent of Republicans say yes!). However, a closer look at the polls revealed something fishy: the numbers were much less random than they should have been. Some degree of randomness is inherent to the process of polling -- it's what the infamous "margin of error" is meant to describe -- and its absence in certain aspects of the polls is a sign that something is wrong. Moulitsas has filed a lawsuit accusing Research 2000 of fraud.
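To see why "too little randomness" is a red flag, consider a simulation. The sketch below (illustrative only -- it uses made-up numbers, not Research 2000's data) runs twenty hypothetical polls of 1,000 people where true support never changes, and shows that honest sampling still produces visible week-to-week jitter:

```python
import random

# Illustrative sketch: simulate 20 weekly polls of 1,000 respondents
# where true support is a steady 50%, and observe how much the reported
# percentages bounce around purely from sampling noise.
random.seed(42)

def simulate_poll(true_support=0.50, n=1000):
    """Return the observed support percentage in one simulated poll."""
    yes = sum(1 for _ in range(n) if random.random() < true_support)
    return 100.0 * yes / n

results = [simulate_poll() for _ in range(20)]
spread = max(results) - min(results)

# Genuine sampling typically spreads these results over several points.
# A long series of polls that barely wiggles at all is statistically
# suspicious -- real random samples are noisier than that.
print(f"range of simulated results: {spread:.1f} points")
```

A fraud detector turns this logic around: if reported numbers vary far less than sampling theory predicts, someone may be writing them down rather than collecting them.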
Moulitsas is breaking with tradition. Usually, journalists accept polls without pausing to think about whether they reflect reality or not. Even when a survey produces silly, meaningless, or self-contradictory results, we seem unable to treat it with skepticism. For example, when the Associated Press ran a poll at the end of 2006 to gauge public attitudes, it published two stories about the same poll under different headlines.
The first: "AP Poll: Americans optimistic for 2007."
The second: "Poll: Americans see doom, gloom for 2007."
Instead of questioning whether the poll was producing a meaningful result, the AP decided to publish nonsense -- presumably hoping that nobody would print the two stories side by side.
Once in a while, you'll see a reporter or a pundit quietly admit to the fallibility of a poll by mumbling some incantation about the margin of error. However, margins of error don't describe most of the problems that plague polls, such as respondents' fibbing. It's well known that people lie on certain kinds of surveys; look up a sex study to see it in action. Heterosexual men, on average, claim to have many more sexual partners than heterosexual women do. (Of course, the numbers, logically, have to be about the same: every time a man has sex with a woman, a woman is having sex with a man.) This effect isn't in the margin of error. Nor is the fact that the wording of a question has a tremendous influence on respondents' answers. (Are you in favor of subjecting enemy combatants to rigorous interrogation? How about to torture?) Nor is the bias that comes from so many people hanging up the phone whenever a pollster calls. Nor are a myriad of other effects, some understood, some not. Yet the media treat the margin of error as a touchstone of a survey's validity, even though it captures almost none of the ways a poll can go badly awry.
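For the curious, here is what that mumbled incantation actually computes. A standard textbook sketch (the conventional 95% formula, not any particular pollster's method): for a poll of n respondents with observed proportion p, the margin of error is 1.96 times the standard error of the sample proportion.

```python
import math

# Sketch of the conventional 95% margin of error for a sample proportion:
# it measures pure sampling noise, and nothing else.
def margin_of_error(p, n, z=1.96):
    """95% margin of error for observed proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll: 1,000 respondents, 50% observed support.
moe = 100 * margin_of_error(0.5, 1000)
print(f"+/- {moe:.1f} points")  # roughly +/- 3 points

# Note what the formula does NOT contain: no term for lying respondents,
# loaded question wording, or the bias from people who hang up the phone.
```

The formula's inputs are just the sample size and the observed proportion, which is precisely the point: it quantifies random sampling error and is silent about every other way a poll can fail.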
Reporters think that they're purveyors of nonfiction, but polls regularly cross the boundary between fact and fantasy. How else can we explain the fact that journalists still conduct internet polls? They are notoriously easy to game; they're why "Hank the Angry Drunken Dwarf" beat out Brad Pitt for People.com's "Most Beautiful Person of the Year" in 1998, why Stephen Colbert won a poll to get a Hungarian bridge named after him, and why Justin Bieber's "fans" apparently want to ship him off to North Korea for his next tour. Yet go to CNN.com, and on the front page is an internet poll. This isn't information; it's titillation.
If Research 2000 did, in fact, fabricate its polls, it crossed a line. However, that line is blurrier than we like to admit.
Charles Seife is an associate professor of journalism at NYU. His next book, Proofiness: The Dark Arts of Mathematical Deception, will be published by Viking in September.