New Study Finds Little Evidence for Relationship Between TV Aggression and Real-Life Aggression in Youth but Claims It Does Anyway

Put simply, the study's findings provide very little convincing reason to believe there's much of a link, but the author appears to over-interpret weak and inconsistent results in ways I would consider to be irresponsible.
02/16/2016 05:39pm ET | Updated February 16, 2017

In recent years, social psychology in general, and media psychology in particular, have taken a beating for a number of bad practices. These include a tendency to chase "statistically significant" results, ignore non-significant findings, and present very weak, trivial findings as more important and conclusive than they actually are. Others and I have argued that these systematic, cultural problems in academic psychology are seriously eroding the credibility of our field as a science. A new study in the journal Developmental Psychology that attempts to link aggression on television to aggression in real life provides an excellent example of these problems. Put simply, the study's findings provide very little convincing reason to believe there's much of a link, but the author appears to over-interpret weak and inconsistent results in ways I would consider irresponsible.

The basic design is actually simple. Ask 467 adolescents about the TV shows they watch and get independent ratings of how much relational aggression (gossiping, social exclusion) and physical aggression (fighting) appear in each show (I will give credit where it is due... the author at least gets independent ratings of show content, something, deplorably, most media studies don't do). Also have the kids (and their parents, in the case of physical aggression) rate how much relational and physical aggression they use in real life. Track the kids for a couple of years and see if there's a correlation over time.

Problems with this study begin immediately upon reading the literature review. The author engages in a practice called citation bias, mentioning only articles that support her personal views about media violence effects. By now there have been numerous studies suggesting that the effects of violent media on aggression are minimal, yet not one receives mention in this paper. This behavior is problematic because it suggests the author has very strong opinions about the topic... opinions that can create researcher expectancy effects, which can indeed result in spurious findings as authors inject strongly held views into how they conduct their analyses or interpret ambiguous results. In one recent analysis, I found that researchers who engage in citation bias are more likely to find negative media effects than those who don't. In other words, these effects may exist more in the minds of the researchers than in real life. Babor and McGovern refer to citation bias as one of the Circles of Hell of scientific writing, and with good reason. Such behavior should not be considered good science.

But that aside, there are other clear limitations to the study. There appear to be no checks that respondents were answering questions reliably. Some respondents purposefully over-report extreme responses to be funny... something called "mischievous responding"... and this is known to create spurious effects. Further, the study appears to have put very little effort into controlling for other variables that might explain any correlation between media and aggression, such as family environment, personality, peer environment, or mental health. Sometimes people with rougher backgrounds seek out more aggressive media. Criminologists have argued for years that media studies need to control for these things, so it's amazing to see studies that still don't.
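To make the confounding concern concrete, a standard first-order partial correlation shows how a raw media/aggression correlation can shrink once a shared cause is statistically controlled. The numbers below are hypothetical, chosen purely for illustration; they are not taken from the study.

```python
import math

# Hypothetical illustration only -- these correlations are NOT from the study.
# x = aggressive-media exposure, y = later aggression,
# z = a potential confound (e.g., harsh family environment) that drives both.
def partial_corr(r_xy: float, r_xz: float, r_yz: float) -> float:
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# A raw media/aggression correlation of .30 shrinks sharply once a
# confound correlated .50 with each variable is partialed out:
print(round(partial_corr(0.30, 0.50, 0.50), 3))  # -> 0.067
```

This is why uncontrolled correlations in media studies are hard to interpret: even a seemingly respectable raw association can collapse toward zero once a plausible third variable is accounted for.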

By this point the reader might be thinking that this study found evidence for effects, and that I'm simply raising skeptical reasons why those effects might be due to other factors. But you'd be wrong. The study, in fact, finds very little at all. In the analyses, physical aggression on television was unrelated to later physical aggression -- an important finding! -- but one that gets ignored. The author then uses a relaxed (and not generally accepted) standard for "statistical significance" to argue for effects, but these effects are so small as to suggest that watching violent television is correlated with a 0.04 percent increase in later aggression (not 4 percent, but 0.04 percent... the proper statistical way of putting this is 0.04 percent shared variance). For relational aggression, the effects are "statistically significant" but still trivial, suggesting that relational aggression on TV is associated with a 0.36 percent increase in later aggression.
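For readers unfamiliar with the term, "percent shared variance" is simply the squared correlation coefficient expressed as a percentage. Back-calculating (these r values are implied by the percentages, not quoted from the paper), the figures above correspond to correlations of roughly r = .02 and r = .06:

```python
# Shared variance is r squared, expressed as a percentage.
# The r values below are back-calculated from the percentages in the text,
# not taken directly from the study.
def shared_variance_pct(r: float) -> float:
    """Percent of variance in one variable statistically shared with another."""
    return (r ** 2) * 100

print(round(shared_variance_pct(0.02), 2))  # -> 0.04  (physical aggression)
print(round(shared_variance_pct(0.06), 2))  # -> 0.36  (relational aggression)
```

Seen this way, even the "significant" relational finding means TV content and later aggression overlap by barely a third of one percent of their variance.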

These are only the associations for one year later... the author had data for two-year lapses as well, but these were not reported (one presumes they were not significant). Remember... these are correlations, not causal effects... and not much else in terms of other variables was controlled. But even if we took this at face value and generously assumed the effects are causal, are we really worried about TV if it increases aggression by about 0.04 to 0.36 percent? Would you notice if you were 0.04 percent more aggressive today than you were yesterday? That's a good argument for television being a dead end if we're serious about preventing youth aggressiveness.

We need to do a lot better than this. First, we need better-designed studies that take controlling for other important variables much more seriously. Second, as an ethical issue in our field, citation bias should be called out for what it is... dishonest... and not permitted. Third... and, in fairness, this is true for all of psychology... we really need a better sense of when we've found effects that are "statistically significant" but trivial and basically junk. At present, as a field, we're doing more to misinform the general public and policy makers, and creating more myths, than we're debunking. I think, as a field, we've become so desperate for attention that we're promoting far too much work as meaningful and important to society when, in reality, it's background noise (or outright unreplicable). It's high time for a revolution of scientific principles in academic psychology. Otherwise we'll remain rooted firmly in pseudo-science.