We Have No Idea Which Prison Programs Are Reducing Recidivism in America

The implication is that in America our prison systems have no idea which programs work and which programs don't. This should concern you. If we don't measure our prison programs to determine if they reduce recidivism, how can we make claims that they work?

Throughout America, prison commissioners and state public safety officials are stating that they are reducing recidivism because of the programs they offer in their correctional systems. I disagree.

I reviewed the websites of all of the state prison systems throughout the US. I wanted to find a prison system that is correctly measuring the outcomes of its prison programs. With the exception of one state, I was unable to find any published outcome evaluations of programs aimed at reducing recidivism on the other 49 state prison systems' websites. Even the one system that did have a properly done outcome evaluation did not do an evaluation for every program it offered. I have to admit that maybe I missed reports on several systems' websites; if this is the case, 1) it doesn't change my overall critique, and 2) the reports should be easier to find.

The implication is that in America our prison systems have no idea which programs work and which programs don't. Maybe they do, maybe they don't. We don't know. This should concern you.

Think about it like this - if we don't measure our prison programs to determine if they reduce recidivism, how can we make claims that they work? Oftentimes you hear administrators say that they use "evidence-based programs" or that their "outcomes" show that their programs are reducing recidivism. Sometimes prison administrators and public safety officials even show numbers on a page with a line showing a decrease in recidivism from one year to the next. I have a lot of problems with all of this.

Evidence-Based Programs Are Everywhere

Just because something worked somewhere else does not mean it will work in a new location. The staff may not be trained properly or may not have the right materials; the inmates may have other challenges that are unaccounted for; the amount of time inmates have in the program may be different; the inmates may have different risks... There may be any number of reasons an evidence-based program isn't working in a new location. The program must always be measured.

After reviewing all 50 state prison departments' websites, I found that most prison systems report on the same data: population trends, recidivism rates of the general population, annual reports, and reports that describe the different programs that are offered. Some DOCs even have reports that try to lead the reader to believe that they measured a prison program's effect on recidivism, but once I look into the methodology I find that a control group was not used, or the control group was made up of program dropouts (selection bias), or the control group was drawn from a different year (cohort bias). Other times, annual reports that claim that programs reduce recidivism offer no methodology, no statistics, nothing at all beyond a statement with nothing to back it up.

I am not saying that "nothing works". What I am saying is very clear - there is virtually no prison system in the country measuring the outcomes of programs aimed at reducing recidivism. We know on a case-by-case basis that some programs work because there are published peer-reviewed studies with rigorous methodology. Doris MacKenzie, PhD, wrote an excellent book in 2006 on this very subject titled What Works in Corrections. A number of years before that, Larry Sherman, PhD, published a book titled Evidence-Based Crime Prevention, which dedicated a chapter to a rigorous review of what works in prison programs. On a case-by-case basis, when a department of corrections is able to get a researcher or a grant to measure programs, they do. The problem is that too many programs are not measured - probably more than 99% of programs are never evaluated for effectiveness - because we are not building into the administration of our programs useful performance indicators that tell us what the effect of a program is on recidivism as compared to a control group.

I am also not saying that the thousands of social workers, program staff and volunteers are not doing a good job. I am just saying that we are not measuring the outcomes of their efforts.

Real Life Challenges

For about a decade I have been working on this issue of proper outcome evaluations of prison programs. Here are some of my war stories.

  • I am on the public safety committee in the MA House of Representatives. In a committee hearing one public safety agency commissioner was presenting on how his agency was reducing recidivism and producing positive outcomes. Once I asked a few questions, it became clear that his agency was measuring outputs, not outcomes. An outcome is a change in behavior; an output is how many people are being served; and an input is the resources we have to work with. Administrators like to throw around "outcomes" when they are really talking about "outputs."

  • On another occasion, I met with a secretary of public safety and her assistant secretary. I explained to them how to measure prison programs' effect on recidivism at no additional cost to the DOC, using the data and staff the department already had. I know what can and can't be done because I am a former director of research and planning. They said they understood and that they had the staff needed to measure the prison programs' effect on recidivism. Then I was brushed off by these two administrators. They left office never having completed the task they said they could and would do. Fortunately, that secretary of public safety is no longer there.
  • On more occasions than I care to remember, I am told that the prison or jail system doesn't have the staffing to properly measure the outcomes of its programs. I usually hear this only after officials have said they are reducing recidivism, and only after I question how they arrived at their figures. In other words, too often correctional administrators try to pull a fast one by saying that they reduce recidivism when they have not properly measured it, or done any measurement at all. This is not good government administration. It is important to admit when you don't know something and not act like you do when you don't. Implicit in the argument that they don't have the staff to do this is the claim that the staff exist but don't have the time. I disagree. I've worked with DOC research divisions across America and the staff are there; the problem is that they lack the skill set to do an outcome evaluation. So what they do instead is compare a program group to the general population, which is useless: there is selection bias in that comparison, and it doesn't rule out third variables (a small simulation after this list illustrates why).
  • In one system I worked in, a group of administrators from the same prison and I created a PowerPoint presentation to show the governor. Because we were using evidence-based programs, I assumed the programs worked and did a cost-benefit analysis of them. It turned out that not a single program was cost-effective. That alone is not a reason for alarm; sometimes saving lives costs money. What was concerning is that the data from the prison system I worked in was withdrawn from the presentation, data from a similar prison system that did see a return on investment was inserted, and that is the data that was presented to the governor. That was wrong.
  • In one system I worked in, I was told by the program staff "You can't measure my program." I responded, "If it is an evidence-based program, it has been measured before, therefore we can do it again and we should." I have also been told that post-reentry challenges are confounds and make it impossible to measure if a prison program works or not. This is not true. A proper control group will have the same post-release risk factors as the program group and therefore will be a 'wash' or have no effect on the measurement of the program.
  • When I worked in a correctional system, I was actually prevented by one former boss from measuring prison programs. She said I was "going to embarrass us" if I measured them. The fact of the matter is that I probably would have embarrassed her had I measured the programs' outcomes, as she had been there for over a decade and never once did a program evaluation. She is still employed in the same capacity, and no programs have been measured correctly to determine whether they have an effect on recidivism.
  • After looking at all 50 state prison systems' websites, I walked away having found only one prison system that has measured its programs' effect on reducing recidivism, and even that system did not do it for every program it offered. Most other systems don't even try, and the few that do try don't measure outcomes correctly. This is very concerning.
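To make concrete why comparing program completers to the general population is useless, here is a minimal simulation in Python. The numbers and the enrollment rule are my own illustrative assumptions, not data from any real system: if lower-risk inmates are the ones who tend to enroll in and complete a program, completers will look better than the general population even when the program does nothing at all.

    import random

    random.seed(0)

    def simulate_inmate():
        risk = random.random()                 # underlying risk to reoffend, 0..1
        enrolls = random.random() > risk       # lower-risk inmates are more likely to enroll
        recidivates = random.random() < risk   # the program has NO effect in this simulation
        return risk, enrolls, recidivates

    population = [simulate_inmate() for _ in range(100_000)]
    completers = [p for p in population if p[1]]

    rate = lambda group: sum(p[2] for p in group) / len(group)
    print(f"General population recidivism: {rate(population):.0%}")    # ~50%
    print(f"Program completers recidivism: {rate(completers):.0%}")    # ~33%, despite zero program effect

That roughly 17-point gap is pure selection bias; a proper control group is what rules it out.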

Measuring crime data is a science. I can't think of any way to get an effect size without a control group. Consider someone saying that a program had a recidivism rate of 50%. Is that good or bad? Compared to what? Measuring the effects of prison programs on recidivism requires very specific methods. There is more than one way to do it, but there are far more ways to do it incorrectly than to do it correctly.
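As a minimal sketch of what "compared to what?" looks like in practice - with hypothetical counts, not data from any real program - a two-proportion comparison against a matched control group produces the effect size that a standalone 50% figure cannot:

    from math import sqrt

    # Hypothetical counts - not real data.
    program_n, program_recid = 200, 100    # 50% of program participants recidivated
    control_n, control_recid = 200, 130    # 65% of the matched control group recidivated

    p1 = program_recid / program_n
    p2 = control_recid / control_n

    # Two-proportion z-test: is the gap larger than chance alone would produce?
    p_pool = (program_recid + control_recid) / (program_n + control_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / program_n + 1 / control_n))
    z = (p1 - p2) / se

    print(f"Program recidivism:  {p1:.0%}")
    print(f"Control recidivism:  {p2:.0%}")
    print(f"Effect (difference): {p1 - p2:+.0%}, z = {z:.2f}")
    # Without the control line, the 50% figure by itself is uninterpretable.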

What To Do and Why

I am currently working with a couple of jails in Massachusetts to put my money where my mouth is and measure the programs' effect on recidivism at no additional cost to the systems. Using a validated risk assessment tool, I am creating a comparable control group so that recidivism rates can be compared after release. There is far more to a program evaluation than I can get into here, but in short, the risk assessment tool says there is essentially no difference in the likelihood of recidivism between the two groups except that one group got a program and the other did not. There is tremendous variation within each group, and both groups have post-release challenges, but these variations are a 'wash' because they are present in both the program and control groups, and the risk assessment tool says the two groups have the same risk to reoffend anyway. Therefore any differences in recidivism are likely the result of the treatment program. This research design is not as good as a randomized controlled experiment, but a matched-pair design meets the minimum level of scientific rigor needed to rule out third variables and move beyond correlation toward causality.
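For readers who want to see the mechanics, here is a minimal sketch of that kind of risk-score matching. The field names, the nearest-score matching rule, and the max_gap caliper are illustrative assumptions of mine, not the exact procedure used in the Massachusetts jails:

    from dataclasses import dataclass

    @dataclass
    class Person:
        person_id: int
        risk_score: float   # score from a validated risk assessment tool
        in_program: bool    # did this person complete the program?
        recidivated: bool   # reoffended within the follow-up window?

    def match_controls(people, max_gap=1.0):
        # Pair each program participant with the unused non-participant whose
        # risk score is closest (within max_gap), so both groups enter release
        # with comparable risk to reoffend.
        treated = [p for p in people if p.in_program]
        pool = [p for p in people if not p.in_program]
        pairs = []
        for t in sorted(treated, key=lambda p: p.risk_score):
            if not pool:
                break
            best = min(pool, key=lambda c: abs(c.risk_score - t.risk_score))
            if abs(best.risk_score - t.risk_score) <= max_gap:
                pairs.append((t, best))
                pool.remove(best)      # each control is matched only once
        return pairs

    def recidivism_gap(pairs):
        # Difference in recidivism rates between matched program and control members.
        rate = lambda group: sum(p.recidivated for p in group) / len(group)
        treated, controls = zip(*pairs)
        return rate(treated) - rate(controls)

Because each program participant is compared only against a non-participant with a near-identical risk score, any remaining gap in recidivism is much harder to explain away as pre-existing differences between the groups.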

Why is this so important?

  • We parole inmates based, in part, on participation in treatment programs, yet we have no idea which programs work and which don't.
  • We spend hundreds of millions of taxpayer dollars on prison programs, but we don't know which programs work and which don't.
  • Inmates and their families are depending on these programs to teach effective skills, yet proper outcome evaluations are not being done.
  • We are releasing inmates back into our communities, often without parole supervision, with the hope that they have been rehabilitated, but we don't know whether the programs they participated in are actually effective.

This is a public safety issue. This is a fiscal and budgetary issue. And this is a good government issue.

Let's, for the sake of argument, say that a prison system measured the programs it offered correctly, using a proper control group to compare against the treatment group. And let's say the programs showed no effect on reducing recidivism in several measured release cohorts. This is good information. We want to know if a program does not work. This is not a reason for alarm. What is a reason for alarm is a prison administrator ignoring or trying to hide those results. There is reason for concern if those results are not used to make changes.

A government that can take self-corrective action without being coerced to do so is the hallmark of a good government. It is a government that is there for the people, not for itself.

What Can You Do?

Ask your prison system to produce evidence that its programs reduce recidivism - not somewhere else, but in your state. If they say their programs work, just ask for the evidence. If they don't produce it, contact your state rep or state senator and ask them to file legislation requiring the state prison system to measure outcomes against a control group. For better or for worse, we get the government we deserve.

Paul Heroux is a state representative from Massachusetts on the Joint Committee on Public Safety and Homeland Security. Paul worked in a jail and a prison before becoming a state rep. Paul has a master's in criminology from the University of Pennsylvania, a master's in public administration from Harvard, and a bachelor's in psychology and neuroscience from USC. Paul can be reached at paulheroux.mpa@gmail.com.
