U.S. Department of Justice, Office of Justice Programs
National Institute of Justice
The Research, Development, and Evaluation Agency of the U.S. Department of Justice

Transcript of "Benefit Cost Analysis for Crime Policy"

Opinions or points of view expressed are those of the author(s)/presenter(s) and do not necessarily reflect the official position or policies of the U.S. Department of Justice.

An NIJ Research for the Real World Seminar
Roseanna Ander, Executive Director, University of Chicago Crime Lab
Jens Ludwig, Ph.D., School of Social Service Administration, University of Chicago

February 24, 2011

Dr. John Laub: Good morning. We'd like to get started if we can. I want to welcome you to today's seminar in the National Institute of Justice Research for the Real World seminar series. My name is John Laub, and I'm the director of the National Institute of Justice, and I want to thank you for taking time out of your schedules today to join us.

Today's presentation is entitled "Benefit Cost Analysis for Crime Policy" and will feature Ms. Roseanna Ander and Dr. Jens Ludwig from the University of Chicago Crime Lab. They're going to discuss the importance of benefit cost analysis and how to make sound decisions in allocating scarce financial resources for crime control efforts. They will also discuss different approaches for identifying policy and program impacts as well as the challenges of placing dollar values on crime costs and benefits.

Now, as somebody who's learning all about budgets, continuing resolutions and planning a science agency without any budget, I'm particularly interested in this timely topic. As we know, all agencies are facing tough economic times, so we must do everything that we can to streamline operations and make efficient use of resources without compromising public safety or the missions of our agencies. So I think this presentation today is going to be quite timely, and I trust it will be quite interesting.

Now it is my pleasure to introduce Ms. Roseanna Ander and Dr. Jens Ludwig. Roseanna Ander is the executive director of the University of Chicago Crime Lab. She has a wealth of experience focused on reducing youth violence, most notably with the Joyce Foundation in Chicago, and prior to that, she served as the public health liaison for Attorney General Scott Harshbarger in Massachusetts.

Dr. Jens Ludwig is the director of the University of Chicago Crime Lab and the McCormick Foundation Professor of Social Service Administration, Law and Public Policy at the University of Chicago. He's one of the nation's leading experts on gun policy and has published extensively about neighborhood effects on crime, early childhood interventions, and the application of benefit cost methods to crime policy.

And as many of you know, I spent 12 years at the University of Maryland before I joined NIJ this past summer, and one of my regrets was not convincing Jens Ludwig to come to the University of Maryland when he was at Georgetown; instead he went to the University of Chicago, in my hometown. I can understand why, but it is a regret.

So, with that, let me introduce both of you, turn it over to you.

Roseanna Ander: Thanks so much, John.

[Applause.]

Roseanna Ander: Thanks. Well, I'm mighty pleased that Jens came to the University of Chicago, so sorry you didn't get him, but I'm glad I did.

So why should those of us who do care about crime policy or who are in the crime policy arena care about benefit cost analysis? You know, isn't it just enough to know whether a program is effective? Well, for one thing, and it will be no surprise to anybody in this room, crime control itself is very costly, especially the way that we do it here in the United States. With over 2 million people behind bars in the United States, we have by far the highest incarceration rate in the world.

In 2006, expenditures on criminal justice were around $214 billion, and of course, that ignores all of the undoubtedly massive indirect costs of crime, particularly those costs that fall on the most burdened and challenged communities and vulnerable families. So government at every level is now looking for ways to reduce the onerous cost of crime control without compromising public safety.

Which leads me to the next point.

Crime itself is also very costly; it's estimated at $1 trillion to $2 trillion in the United States, and again, this too disproportionately affects the most disadvantaged and vulnerable populations. Homicide is the leading cause of death of young black males, age 15 to 24, and it's responsible for more deaths than the next nine leading causes of death combined.

So how do we determine how much crime control we should do? This is one important use of benefit cost analysis. In this case, it can help us weigh the benefits of scaling back criminal justice system responses or particular types of responses against the cost of having more crime.

A second important use of benefit cost analysis for crime policy is as a tool to figure out how to get the maximum crime control for a given cost. As an example, last year, the United States spent about $100 billion on police and nearly $50 billion on judicial costs and about $70 billion on corrections. So is that the right allocation, or could we get more crime control at a lower cost? A good benefit cost analysis can help answer that.

So, for a concrete example of the importance and utility of benefit cost analysis, we need only turn to the Abdul Latif Jameel Poverty Action Lab at MIT, which was established about seven years ago to use rigorous randomized evaluations and benefit cost analysis to address the seemingly intractable problem of poverty in the developing world. They currently have 51 affiliated professors around the world who are in the process of carrying out about 250 randomized controlled evaluations and benefit cost analyses, so let's look at one example of how useful the benefit cost analysis that they have done can be.

Schooling and education are, perhaps not surprisingly, as important an antipoverty strategy in the developing world as they are here. In the developing world, there are huge challenges with schooling, including just getting kids to attend school regularly. So the researchers affiliated with the MIT poverty lab did a set of experiments to see whether several different strategies were effective at increasing school attendance by children, and then they ran a horse race between them to see which was the most cost effective.

So they looked at conditional cash transfer programs, such as paying mothers to get their children to school. They also looked at hiring additional teachers to reduce the class size because, in some of these areas, the class sizes are just enormous, 80 children and just one teacher, so they cut the class size in half with an additional teacher. And then they provided deworming treatment for children because parasitic worms are a very significant problem in the developing world, and then they looked at whether each of these strategies was effective at improving school attendance by the children.

For each additional child year of school, conditional cash transfers cost about $6,000. I guess I should say, does anybody have a guess as to which one is the most cost-effective strategy? Yeah, this is a pretty sophisticated crowd.

So, for each additional child year, it's $200 for hiring teachers, which is just astounding to me, and $3.25 per additional child year for deworming.
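[Editor's illustration: a minimal Python sketch of the cost-effectiveness comparison just described, using the per-child-year cost figures from the talk; the $1,000 budget is a hypothetical number chosen only to make the ranking concrete.]

```python
# Cost-effectiveness comparison from the J-PAL schooling experiments,
# using the approximate per-child-year figures cited in the talk.
costs_per_extra_child_year = {
    "conditional cash transfers": 6000.00,
    "extra teacher (smaller classes)": 200.00,
    "deworming treatment": 3.25,
}

budget = 1000.0  # hypothetical budget, in dollars
for program, cost in sorted(costs_per_extra_child_year.items(),
                            key=lambda kv: kv[1]):
    print(f"{program}: ${cost:,.2f} per extra child-year "
          f"-> {budget / cost:.1f} extra child-years per $1,000")
```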

So I'm going to let Jens take over now and describe the more technical aspects of how you do a benefit-cost analysis, but this should illustrate really why it's important to be able to compare not just what's effective but how cost effective these are.

Dr. Jens Ludwig: Thanks so much, Roseanna.

So Roseanna has convinced you that benefit cost analysis is really important. It's really important first to help us decide how much money to put into crime control versus other pressing social problems in a world in which we have very constrained and increasingly constrained resources, and the second way a benefit cost analysis is helpful is it helps us decide within our crime control budget how to allocate resources towards the most socially beneficial uses. Okay?

So what I want to do is I want to spend a few minutes talking about how you actually do benefit cost analysis in the crime policy application, which raises a bunch of additional challenges beyond what you have in other important policy areas. Okay?

Now, Roseanna and I go around and we spend a lot of time talking to practitioners and policy-makers about benefit cost analyses, and in our experience, a lot of people have the view that benefit cost analysis is basically just taking whatever impact estimate you have lying around on the shelf and putting a dollar value on it. And so the first thing that I want to emphasize is that a benefit cost analysis is only going to be as useful as the quality of the program evaluation upon which it's built. It's a very obvious point when it is said out loud, and yet this is something that we don't appreciate nearly enough when we do benefit cost analysis in practice. Okay?

And so I'm going to spend some time really talking about the challenges of getting convincing impact estimates as a first step, as a foundational step doing benefit cost analysis, and then you've got the second problem, which is what sort of dollar values do you put on program costs and program benefits once you have an impact estimate for a crime policy or a crime program that you actually believe. And what I want to argue is that both of these steps are way more challenging than we often appreciate. Okay.

So I'm at NIJ. This is an audience that clearly understands the selection bias challenge to understanding the causal effects of crime policy interventions. Right? Imagine that you're trying to evaluate the effects of Boy Scout participation on delinquency. Right? And we go out and we do a survey of kids on the South Side of Chicago, and then we try and figure out what the effects of being in the Boy Scouts are on your risk of delinquency.

No one is going to be surprised here that Boy Scouts are less likely to be involved in delinquency, and nobody here is going to be surprised if that held true even if I did a three-hour survey and collected as many covariates as you could possibly imagine. At the end of the day, you are still going to believe that there is something very hard to measure about these kids that is driving the difference between two otherwise observably identical kids: one of them selects into the Boy Scouts; the other doesn't.

You're going to be thinking show me two kids with exactly the same family backgrounds, exactly the same individual kid characteristics, the exact same school participation records, the exact same prior delinquency arrests. That one kid who chooses to be in the Boy Scouts, there's something about that kid that got them to be an Eagle Scout, and I'm thinking that it's that hard-to-measure thing that got the kid to be the Eagle Scout that is actually the thing that is reducing their delinquency, so this problem is well understood.

The other problem that's well understood is we know that randomized clinical trials like they do in medicine solve this problem. Okay. So we know when you go to the doctor and the doctor says you have high cholesterol, you should take Lipitor, the reason that you trust the doctor is because the FDA has required Merck or whoever it is that makes the Lipitor to do a randomized clinical trial: to enroll a bunch of people and randomly assign this half to get Lipitor and this half to get a sugar-pill placebo. Because of random assignment, the treatment and the control group will be identical on average with respect to everything that we can observe and not observe that affects your health outcomes, so any observed differences in average health outcomes between the treatment group and the control group, the sugar-pill placebo group, will be convincingly due to the Lipitor itself. Right? We know that the randomized clinical trial will solve this problem.
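[Editor's illustration: a minimal Python simulation of the point about random assignment, showing that a coin-flip treatment balances even an unobserved risk factor and so recovers the true effect; all variables and numbers are hypothetical.]

```python
import numpy as np

# Sketch: random assignment balances even an *unobserved* risk factor,
# so the treatment-control outcome gap is an unbiased effect estimate.
rng = np.random.default_rng(0)
n = 100_000
unobserved_risk = rng.normal(size=n)   # something no survey measures
treat = rng.random(n) < 0.5            # coin-flip assignment
true_effect = -0.3
outcome = unobserved_risk + true_effect * treat + rng.normal(size=n)

print("mean unobserved risk, treatment:", unobserved_risk[treat].mean().round(3))
print("mean unobserved risk, control:  ", unobserved_risk[~treat].mean().round(3))
print("estimated effect:",
      (outcome[treat].mean() - outcome[~treat].mean()).round(3))  # near -0.3
```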

What is much harder to figure out in the crime policy area is whether the only thing that solves the selection bias problem is randomized trials. This to me is like the 800-pound-gorilla question, one of the 800-pound-gorilla questions at the heart of benefit cost analysis.

And so what you see in the criminology literature and the crime evaluation literature are competing perspectives. So, in one camp, you would put people like my friend, Larry Sherman, who my other friend, Rob Sampson, has called the "randomista."

So, on the one hand, there are randomistas who think the only thing that you can do is randomized trials. Right? The term conjures up "Sandinista," which makes me think whoever came up with it had some prior on who's right in this debate, but let's accept the terminology as it is. On the one hand, you have the randomistas: the only thing we should be doing is randomized experiments.

And on the other side are people like Rob Sampson. I'm going to coin my own term for people like Rob, and this is not pejorative like "randomista." I would call Rob a research pluralist: "Can't we all get along? Every sort of research design has its own sort of merits, and why rank one over the other?"

A lot of times, practitioners look at the researchers arguing about research design and think that this is just a nerdy academic point that's really completely irrelevant. It has sort of a very academic, almost religious flavor to it that doesn't seem to have any practical implications, and so what I wanted to do is just give you an example from a different context that might not be so familiar to a crime audience about why this question of what methods are able to identify causal impacts is such a fundamentally important question for benefit cost analysis. Okay?

So here's an example from medicine. Oh, actually, the other thing I meant to say is that when practitioners and lots of researchers think about this issue, we assume that whatever selection bias we have with nonexperimental methods must be modest. Right? We assume that there is a small tradeoff between the clinical trial and the nonexperimental design, okay, and so this example from medicine is intended to highlight that there is no guarantee. That might be true, but there is no guarantee that the bias is small.

Okay. For many, many years, there was a huge epidemiological literature; "epidemiology" is the fancy medicine term for "observational." So there was a huge epidemiological literature in public health that looked at health risks for women, postmenopausal survivors of breast cancer. On the basis of that observational epidemiological research, the medical community was encouraging women who were breast cancer survivors to get hormone replacement therapy, and the epidemiological literature suggested this was very beneficial for women's health and reduced the risk of recurring cancer.

And the way that the medical people think about this is they went out and they measured every possible covariate that they thought would be relevant for selection into hormone replacement therapy; they had observational data, not a trial, initially. In an observational study, the women who get hormone replacement therapy are the ones who choose to do it or choose to go to doctors who recommend it. Right? So they measured all the covariates they could think of that would be related to selection into hormone replacement therapy and to subsequent health outcomes. Right?

In a world in which we had good theory, this sort of observational design would be great. I am one of these people who thinks that our social science theories are terrible, and so my prior is to be worried about this sort of research design. I'm going to coin a couple of terms today, and one term is "regression-adjust and pray": we're going to control for the covariates that we can actually observe in our data collection, and we are going to hope that any lingering bias is small. Okay?
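[Editor's illustration: a minimal Python simulation of "regression-adjust and pray," in which controlling for the observed covariate still leaves selection bias from an unobserved one; all variables and numbers are hypothetical.]

```python
import numpy as np

# Sketch of "regression-adjust and pray": selection into treatment depends
# on both an observed and an unobserved factor. Controlling only for the
# observed one leaves a biased estimate of a treatment with ZERO true effect.
rng = np.random.default_rng(1)
n = 100_000
observed = rng.normal(size=n)
unobserved = rng.normal(size=n)
treat = (0.5 * observed + 0.5 * unobserved + rng.normal(size=n)) > 0
outcome = observed + unobserved + rng.normal(size=n)  # no treatment term

# OLS controlling only for what we can observe.
X = np.column_stack([np.ones(n), treat.astype(float), observed])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("estimated 'effect' of a treatment that truly does nothing:",
      beta[1].round(3))  # noticeably above zero: lingering selection bias
```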

Literally millions and millions of postmenopausal cancer survivors got hormone replacement therapy. Eventually, the federal government sponsored a randomized clinical trial.

That trial had to be stopped early because the treatment that the epidemiological literature said was helpful increased your risk of cancer recurrence threefold. So it's not just that we got the benefits of hormone replacement therapy wrong by 20 percent, which itself would be problematic for benefit cost analysis when you are trying to rank interventions. Right? It's not just that we got the magnitude of the benefit a little bit wrong. We got the sign of the thing wrong, and we needlessly killed thousands of women. Thousands of women died needlessly because we gave them the wrong advice about this treatment. Okay?

Crime policy is at least as important as the area of medicine. I live on the South Side of Chicago, and I can see this every day when I go outside my house. This is a fundamentally important public policy problem, and we should be taking impact evaluation and benefit cost analysis at least as seriously in the crime area as we do in the medical area. Okay.

So what I want to argue, what Roseanna and I want to argue, is sort of for a Clintonian-Obamaesque kind of third way: you don't need to be in the randomized-controlled-trial-or-bust category, because there are natural experiment alternatives that we think have good chances, in some circumstances, of giving you answers that are nearly as good as randomized experiments. The key to these natural experiment designs, what makes them different from the "regression-adjust and pray" design, is that the common denominator is typically some sort of policy-induced variation, where you understand what it is that's causing some people to get the thing and not others, or some cities to get the thing and not others, and I'll give you some examples.

So the examples that I want to give you are feasible examples; these are things that come up a lot in the crime application, and once you recognize these common research designs, we will all recognize that there is lots of natural experiment money lying on the table that we could do a better job of taking advantage of. So I want to talk about lotteries, discretion creating near-random assignment, something I'll call assignment through prioritization (this is probably a terrible title, but it will be immediately clear once I talk about it), and then policy changes over time. And then I'm going to talk about the challenge of putting dollar values on these program impacts, and then I'm going to stop.

But first, I want to give a prediction. I think we're doing okay on time. Economists have a name for this sort of natural experiment design, where you have some plausibly exogenous policy change that causes some people to get something and others not. Our term for this is a "design-based research study," which is different from "regression-adjust and pray."

And so this is the way that economists now think about doing program evaluation when you don't have a randomized experiment. Economists have become very concerned about "regression-adjust and pray," and my forecast is that in 10 or 15 years, criminology is going to look exactly like economics does now. We're going to move more and more towards this design-based sort of research in criminology, and during Q&A, I'm happy to talk more about why I think that that is going to be the case, why you're going to see that convergence.

The only thing I want to confess is that you can already see glimmers of this movement in the criminology literature, but because I might be one of the first people to go on public record as making the forecast, in 15 years I am going to claim it was my talk at NIJ that actually drove this thing, and then whoever is NIJ director at the time can say, look at what social good our speaker series does.

Let me talk about lotteries first. A lot of times, the government does beneficial things for people, or things that we think are beneficial, where there's basically excess demand. There are a lot more eligibles, eligible cities, eligible neighborhoods, eligible people, than we actually have money to help. In education and housing, what they often do is use randomized lotteries: to allocate valuable housing subsidies for poor families when they don't have enough to give to everybody, or in public school choice systems, where there is one decent public school on the South Side of Chicago and every family wants to get their kid there. The policy-makers in those areas say the fairest way, the fairest way, so this is not about research, the fairest way to allocate scarce resources is to basically flip a coin.

And one example of this that we've been working on in Cook County in the crime area is we are in the middle of transforming our juvenile detention center in Cook County, and for complicated political economy reasons, that is, union litigation, right now half of the detention center, half of the residential pods within the detention center, is operating under a basically new therapeutic model, cognitive behavioral therapy, behavior modification stuff, and the other half are trapped in the old status quo environment where the kids basically sit around and watch TV for most of the day. And the administrators at the detention center are eager to convert over the rest of the facility, but they are locked into this situation for the time being.

And so we asked them, do you guys know which kids would benefit most from going to one section versus the other, and they said, "We don't have any strong priors about that. Essentially, what we do right now is arbitrary." And we said, "If you guys don't have strong priors about who would benefit the most and what you're doing is essentially arbitrary, it would be hugely helpful for detention centers around the country if you would do something that was explicitly arbitrary and random rather than essentially arbitrary, so that we could evaluate it." And they did, and that's what we're doing right now. Right?

This required just minimal changes in what they're doing, to use a fair lottery like they do in education and housing, and it's going to make these transformations in the detention center about a million times more informative about the value of this sort of thing for the kids who go through it. Okay.
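[Editor's illustration: a minimal Python sketch of how a lottery-based evaluation like the one described would be analyzed, as a simple comparison of means between lottery winners and losers; the outcome name, rates, and sample sizes here are hypothetical, not Crime Lab data.]

```python
import numpy as np

# Sketch: when excess demand is resolved by a fair lottery, evaluation is
# just a comparison of means between lottery winners and losers.
rng = np.random.default_rng(2)
recidivated_new_pods = rng.binomial(1, 0.30, size=400)  # hypothetical outcomes
recidivated_old_pods = rng.binomial(1, 0.38, size=400)  # hypothetical outcomes

diff = recidivated_new_pods.mean() - recidivated_old_pods.mean()
se = np.sqrt(recidivated_new_pods.var(ddof=1) / 400 +
             recidivated_old_pods.var(ddof=1) / 400)
print(f"estimated impact of the new pods: {diff:+.3f} (s.e. {se:.3f})")
```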

Let me give you another example. This is an example where decades of corruption in Chicago, which have imposed huge costs on the citizenry, have generated a great benefit for researchers in Chicago.

So it used to be the case that we had a terrible judge payola system in Chicago, and the people from Chicago are smiling at this. It used to be the case that defense lawyers would have some back deal with their favorite judge and make sure that their clients got routed to the judge with whom they had some inappropriately cozy relationship, and then the judge would give their client a deal. To overcome that problem, the court system in Chicago now randomly assigns all felony cases to judges, and judges being judges, they all think they know exactly what the right thing to do is with each felony case, and it turns out they all think very different things in terms of how long the sentences should be, who should go to jail, who should go to some therapeutic program, whether these therapeutic programs are all terrible, and so on.

So your outcome is hugely dependent on which judge you go to, and the likelihood that you go to Judge Judy versus Judge Wapner is essentially a coin flip. I think I've just dated myself by saying Judge Wapner.

So Charles Loeffler, a graduate student at Harvard, actually a student of Rob Sampson's, has taken advantage of the random assignment of felony cases to judges to look at the effects of different jail and prison sentence lengths on your subsequent labor market outcomes, and I think he's going to look at recidivism risk as well: does spending more time in prison have a brutalizing effect and increase your risk of recidivism, or does specific deterrence operate, so that a longer prison sentence scares you straight? Okay? And there are lots of examples in the criminal justice system, once you start to look for them, that have this sort of flavor of random assignment to decision-makers who use their discretion differently. Okay.
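[Editor's illustration: a minimal Python sketch of the judge-assignment design, using randomly assigned judges' sentencing severity as an instrument for time served via manual two-stage least squares; the data are simulated and this is not Loeffler's actual analysis.]

```python
import numpy as np

# Sketch of the judge-assignment design: randomly assigned judges differ in
# sentencing severity, so the assigned judge's severity is an instrument for
# time served. Manual two-stage least squares; all data are simulated.
rng = np.random.default_rng(3)
n, n_judges = 50_000, 20
judge = rng.integers(n_judges, size=n)        # random case-to-judge assignment
severity = rng.normal(size=n_judges)[judge]   # judge-specific harshness
riskiness = rng.normal(size=n)                # unobserved defendant riskiness
months = 12 + 3 * severity + 2 * riskiness + rng.normal(size=n)
true_effect = -0.05                           # true earnings effect per month
earnings = 10 + true_effect * months - riskiness + rng.normal(size=n)

# Stage 1: predict months served from judge severity alone.
X1 = np.column_stack([np.ones(n), severity])
b1, *_ = np.linalg.lstsq(X1, months, rcond=None)
months_hat = X1 @ b1
# Stage 2: regress earnings on predicted months served.
X2 = np.column_stack([np.ones(n), months_hat])
b2, *_ = np.linalg.lstsq(X2, earnings, rcond=None)
print("2SLS estimate per month served:", b2[1].round(3))  # near -0.05

# For comparison, naive OLS is biased by the unobserved riskiness.
X3 = np.column_stack([np.ones(n), months])
b3, *_ = np.linalg.lstsq(X3, earnings, rcond=None)
print("naive OLS estimate:", b3[1].round(3))
```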

Now, the other thing that we do in the criminal justice system a lot is we are nervous about giving people discretion, and so sometimes we try and systematize the treatment allocation process. For instance, there is a movement among juvenile detention facilities to use explicit screening tools to identify risk of recidivism or whatever else for kids who come into the facility and then use pre-specified cut points to decide what kids get. There are lots of other examples in the criminal justice system as well.

Now, think about what happens in this situation. You've got two kids who come into detention, and one kid gets a 39 on the YASI or the MASI or the MAZI or whatever the screening tool is. One kid gets a 39 and does not get sent to detention, or whatever it is. The other kid gets a 40 on the risk assessment and gets sent to detention. The kids scoring 39 and 40 on the risk tool are going to be really, really similar with respect to almost all of the background risk factors for recidivism and all the other outcomes we care about, and so we can use this sort of natural experiment, caused by the desire to cut down on discretion, to do what we call a "regression discontinuity design." And there are lots of examples of this floating around in the criminal justice area; once you recognize that this is a way to overcome the selection bias concern, this is easy natural experiment money on the table. Okay.
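[Editor's illustration: a minimal Python sketch of a regression discontinuity design at the score-40 cutoff, fitting a line on each side of the cutoff within a bandwidth; the screening-tool scores, outcome, and effect size are all simulated.]

```python
import numpy as np

# Sketch of a regression discontinuity at the score-40 cutoff: fit a line
# on each side of the cutoff within a bandwidth and compare the two fitted
# values at the cutoff. All data are simulated.
rng = np.random.default_rng(4)
n = 20_000
score = rng.integers(20, 61, size=n).astype(float)  # risk-tool score
detained = score >= 40                              # pre-specified cut point
true_jump = 0.10                                    # simulated detention effect
rearrest = 0.2 + 0.005 * score + true_jump * detained + rng.normal(0, 0.3, n)

def fit_at_cutoff(mask):
    """Fit rearrest ~ a + b*(score - 40) on one side; return the value at 40."""
    X = np.column_stack([np.ones(mask.sum()), score[mask] - 40.0])
    b, *_ = np.linalg.lstsq(X, rearrest[mask], rcond=None)
    return b[0]

bandwidth = 10
left = (~detained) & (score >= 40 - bandwidth)
right = detained & (score <= 40 + bandwidth)
print("RD estimate of the detention effect:",
      (fit_at_cutoff(right) - fit_at_cutoff(left)).round(3))  # near 0.10
```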

The final example that I want to give is something that lots of people already do in criminology, and so I just want to make one small observation about it. The thing that we already try and take a lot of advantage of in criminology is policy changes, to do basically difference-in-differences designs, where some jurisdiction changes its policy and another jurisdiction doesn't, and we compare the pre/post trend in the jurisdiction that changes with the pre/post trend in the jurisdiction that doesn't change. Okay.

And the one small thing that I wanted to point out about this is that one way we can try and check whether our difference-in-differences design is giving us the right answer or not is to basically do the difference-in-differences estimate during the pre-period.
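[Editor's illustration: a minimal Python sketch of a difference-in-differences estimate together with the pre-period placebo check described here; the yearly crime rates are invented for illustration and are not the Brady data.]

```python
import numpy as np

# Sketch of difference-in-differences plus the pre-period placebo check:
# rerun the same contrast entirely within the pre-period, where the
# estimated "effect" should be zero. Yearly rates below are made up.
years = np.arange(1988, 1998)
treated = np.array([9.0, 9.2, 9.1, 9.3, 9.2, 9.1, 8.6, 8.4, 8.3, 8.1])
control = np.array([8.0, 8.2, 8.1, 8.3, 8.2, 8.1, 8.0, 7.8, 7.7, 7.5])
policy_year = 1994  # e.g., when the policy change takes effect

def did(t_pre, t_post, c_pre, c_post):
    return (t_post.mean() - t_pre.mean()) - (c_post.mean() - c_pre.mean())

post = years >= policy_year
print("DiD estimate:", did(treated[~post], treated[post],
                           control[~post], control[post]).round(3))

# Placebo: pretend the policy hit in 1991, using only pre-period data.
# A large "effect" here would signal diverging pre-trends.
fake_post = (years >= 1991) & ~post
pre = years < 1991
print("pre-period placebo:", did(treated[pre], treated[fake_post],
                                 control[pre], control[fake_post]).round(3))
```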

So my friend Phil Cook at Duke and I wrote a paper in 2000 that evaluated the Brady Act. This was a federal law in the early '90s that caused some states to have to change their background check policies, while a bunch of other states already had such policies. Those are the control states and the new treatment states. Okay.

And John Lott wrote a paper, or in his More Guns, Less Crime book he had a couple of pages, where he said the Brady Act caused crime to go up, okay, and he did that by looking at overall crime rates in Brady states and non-Brady states once the Brady Act went into place. So he's using the same research design that Phil and I are using. Phil and I claim that Brady had no detectable effects on homicide rates.

Why do we get different answers? Here's an example of why we get different answers and why specification testing can give you a glimmer behind the curtain about whether your nonexperimental research design is giving you the right answer or not as a way to set up benefit cost analysis.

Look at the bottom two graphs for starters. Let me see if I can do this. This is the homicide rate for people under 21 in the control group states, and this is the homicide rate for people under 21 in the treatment states, the states that had to change their gun regulations as a result of Brady. Okay? And so we think that people under 21 shouldn't really be affected by Brady or shouldn't be first order affected by Brady because they couldn't walk into gun stores even before Brady, so requiring gun stores to do background checks shouldn't have a first order effect on people under 21.

So, if our difference-in-differences research design worked, you would expect to see parallel time paths for juveniles both before and after: after, because they're not affected by the policy, and before, if the control states are a good control group for the treatment states.

Think about what the difference-in-differences research design does. It says: I am going to attribute to the policy any difference between the pre/post trend in the treatment states and the pre/post trend in the control states. I am going to call any difference in trends there the "policy effect." That assumes that the two groups of states would have had similar trends, right, had there not been a policy.

We can verify that by looking to see whether they have similar trends before the policy goes into place. Right? Look at the juvenile homicide rates in treatment and control states in 1985, and look at the juvenile treatment-control difference right before Brady goes into effect. You can see it is dramatically wider in 1990, I guess that's 1993, than it was in 1985. Put differently, the pre-trends are very different for the treatment and control states. In a world in which you think there is mean reversion, so crime rates are often like Qualcomm, what goes up the most has the farthest to fall, right, you see the control states had unusually large increases during the pre-Brady period.

John Lott lumps the adult and the juvenile trends together and says, "Aha, the control states, the no-change states, had larger declines after Brady than the treatment states." That is his basis for concluding that Brady had adverse effects on crime. But that is all driven by the juveniles here. You can see the adult trends are identical pre and post. It's all driven by the juvenile trends, and that post-period difference is driven entirely by the fact that crime rates ran up more for the juveniles in the control states beforehand.

So the point of this is, the point of all of this is that a really good impact evaluation first is the most important thing for doing a successful, useful benefit cost analysis. We have at least these four design-based options for identifying policy or program impacts, and we have some ways of figuring out whether these designs are working well; that is, giving us answers close to a randomized experiment. And I think there is lots of room for us to do a lot more of this in the crime policy area.

Now, there is the second crucial step of doing a benefit cost analysis. Program evaluation tells you the effect of a crime policy or a crime control program on outcomes in their natural units: arrests, crime victimizations, people sent back to prison or jail, or whatever it is. Right?

If you want to do a cost benefit analysis, the cost side of the equation is already in a dollar metric. Right? So we measure our cost. What is the cost to NIJ of holding this event? We can look at the budget, and we can see there are chairs and there are water bottles and whatever it is. Most of the things on the cost side are already monetized. Most of the things on the benefit side of crime control programs, you can't look at a bill. There is no market price that already tells you the dollar value of the benefits of these interventions. Right? And you need to have them both on the same metric, so that they can be compared.

So the use of dollars is not driven by the view that money is the only thing that matters, right, nor is the use of dollars for benefit cost analysis driven by the view that tangible stuff that can be priced in a market is the only thing that matters. The use of the dollar metric is driven entirely by the need to have the benefits of crime policy interventions on the same scale as the costs of these programs, and in fact, a good benefit cost analysis will obsess about getting the intangible costs of crime right. It turns out that the intangible costs of crime, the stuff that's not already priced, wind up driving the whole thing.

So Phil Cook and I wrote a book in 2000 that tried to estimate the total social cost of gun violence, and we estimated that the social cost of gun violence through the early to mid '90s was on the order of about $100 billion per year. It's a little bit hard to figure out exactly what share of that is the tangible costs, but when the public health researchers think about the costs of gun violence, they look at the tangible stuff, medical costs and lost earnings, and based on the work that Phil and I did, I would be surprised if the tangible stuff was any more than like 5 or 10 percent of the total cost. Right?

Just think about the families at Columbine High School. Right? Imagine that you go to a family at Columbine High School and you told them, "Good news. I have an insurance policy that is going to fully reimburse you for all the out-of-pocket expenses associated with your child being shot and killed at Columbine." What decent parent would care even one iota about being compensated for their out-of-pocket costs? Right? That is the easiest way, at least for me, to appreciate the idea that the out-of-pocket stuff is really just small potatoes compared to the real social cost of crime.

Now, this presents us with a challenge because, as I said before, we need to put those intangible costs onto the same metric, that is, dollars, so they can be compared to the costs.

So let me spend just a few minutes talking about the different ways to do this and the challenges, but before I talk about the different ways to do this, let me step back and say a word about what the objective is for conceptually what we're trying to do with benefit cost analysis. This seems like a good place to remind us of that.

If you've been reading the papers, you know that my hometown of Chicago has a new mayor. So, on behalf of all Chicagoans, I can say we are all looking forward to learning a lot of new four-letter words over the next four years.

[Laughter.]

Youth violence in particular has been a huge problem in Chicago for the last couple of years. We've had hundreds and hundreds of Chicago public school students shot, so this is one of the major policy priorities in the city. The city is broke. We have a $600 million budget deficit. So the new mayor has a lot of very, very difficult budgeting problems.

So, in Washington, they have a saying, "budget is policy," or some version of that; is that right? And so one of the mayor's most fundamental problems is to think about how to do the budgeting, which priorities get more money versus less money in a world in which you've got to cut $600 million out of the $6 billion city budget, and that's a lot of cuts if your base budget is $6 billion.

Think about what people in Chicago get when Mayor Emanuel puts more money into crime control or interventions that control crime. So what he is doing is he is writing a budget for next fiscal year. He could make the crime control budget in the city $50 million larger or status quo or $50 million less.

What is the benefit to all of us in Chicago? So then us as voters and taxpayers and residents of Chicago, we're trying to decide whether we want Mayor Emanuel to do that, and Mayor Emanuel is trying to decide whether he wants to do that or not. And benefit cost analysis is the tool that we have to help the mayor think about that problem.

What is the benefit to the City of Chicago from putting more money, relatively more money into crime control in next year's budget? The benefit that we get is that every one of us who lives in Chicago experiences a decline in the risk of crime victimization next year.

Now, nobody knows. I mean, I know that I live in Hyde Park on the South Side of Chicago, so I know that I'm at elevated risk of crime victimization compared to my friends who work at Northwestern and some people I work with at the Crime Lab who live in these nice leafy North Shore suburbs. Right? I know I'm at higher risk than Roseanna is and my Northwestern professor friends, but none of us knows exactly who is really going to be victims of crime next year.

We only know our relative probabilities, and when Mayor Emanuel makes his crime budget, what he's doing is he is affecting our probabilities, and so the benefit cost question that we are trying to answer to guide Mayor Emanuel's decision is what is the value to people in Chicago from changing our risk of crime victimization, so this is what Phil Cook and I call in our book, we call an "ex ante perspective." This is the problem that policy-makers are trying to solve. We're trying to allocate money to the crime budget. From an ex ante perspective, we don't know who's going to be victimized, and all we're doing is moving around those probabilities, and so we want to figure out what the public is willing to pay, what is it worth to the public to change those probabilities a little bit in one direction or the other. That's the benefit cost problem that we're trying to solve.
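[Editor's illustration: a minimal Python sketch of the ex ante benefit calculation, valuing a small change in everyone's victimization risk by aggregating willingness to pay across the city; every number below is hypothetical.]

```python
# Sketch of the ex ante perspective: the benefit of a bigger crime budget is
# the population's total willingness to pay for a small drop in everyone's
# victimization risk. Every number below is hypothetical.
population = 2_700_000       # roughly Chicago's population
wtp_per_resident = 25.0      # hypothetical WTP for the risk reduction
total_benefit = population * wtp_per_resident

program_cost = 50_000_000.0  # the $50 million budget change in the example
print(f"ex ante benefit: ${total_benefit:,.0f} vs. cost: ${program_cost:,.0f}")
print("benefit-cost ratio:", round(total_benefit / program_cost, 2))
```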

So, with that as the target, let's think about the three different methods that have been used in the literature now. Okay?

So Mark Cohen, he's an economist at Vanderbilt, now at Resources for the Future. He has done most of the path-breaking work in assigning dollar values to, in monetizing, the social cost of crime. This is an extraordinarily difficult problem, and so I hope Mark one day wins something like a congressional medal of research heroism.

The first thing that Mark did, in a world in which you have nothing and intangible costs are really important, I think was a reasonable first thing to try. Mark recognized that there is an environment that already puts dollar values on the intangible costs of crimes, which is jury trials. There are some street crimes that wind up with civil litigation as part of them as well, and a jury is tasked with the job of figuring out damages. And so Mark had the clever idea of using those dollar values from the jury awards.

There are two problems with that. One of the problems with jury awards is obvious. Given the strong correlation between poverty and your risk of being a criminal offender, most of our street crime offenders are basically judgment proof. It would not be worth your time and money to take your average street crime offender to civil court for damages because they just don't have anything. Right? So the share of street crimes that wind up in a civil case with a jury awarding damages is a very, very non-representative set of all crimes. Okay. So we want to be very careful; it's going to be very difficult to extrapolate from that to all sorts of other crimes.

But there is a second way in which the jury awards are problematic. The jury is asked to answer a different question from the question that Mayor Emanuel is answering. Right? Mayor Emanuel is saying, "I am going to reallocate the budget to change your risk of crime victimization in the future, and what are you willing to pay to effect that ex ante change?" The jury has a victim that's identified already, so this is an ex post perspective. The crime has already happened, and the jury, I believe, is asked to come up with a dollar value that would make the victim whole. Right?

Now, if we took that answer seriously, let's go back to Columbine and you have a parent whose kid was shot and killed. Now imagine this is a civil case against like one of the gun manufacturers. Suppose that the jury found the gun manufacturer guilty. I mean, I'm not taking a stand on whether they should be, but let's just suppose for the moment they were.

Now imagine you're on the jury tasked with the job of deciding what dollar amount would make this parent whole. If you are a parent, you will think the answer is infinity, that there is no dollar amount that would make me indifferent between getting that payment and losing my seven-year-old daughter. But that's the wrong question to ask, because if that were the question that was relevant for benefit cost analysis, then for Mayor Emanuel, if the benefits of crime control are infinite, crime control is all the City of Chicago could do. That's not the way that we actually live, right, or the way we live our lives as individuals. I actually do leave my house in Hyde Park periodically to go grocery shopping, even though that exposes me to some risk of crime victimization, and Mayor Emanuel really does put money into parks instead of crime control, because the risk of crime victimization is only one of the things we care about. So we are willing to bear some risk of crime victimization in a way that's very different from this jury award question of what it would take to make the victim whole. So it's an unrepresentative set of cases, and it's answering the wrong question.

Now, the other thing that we have tried to do in the cost of crime literature is look at people's behavior in the marketplace. In effect, lots of us in the private market already make decisions that reveal our willingness to pay to reduce the risk of crime.

So I live in Hyde Park. We are in the middle of the South Side of Chicago, surrounded by very high crime, high-poverty neighborhoods. Even within Hyde Park, where you are is very highly related to the likelihood that your home will be broken into or that there will be a street crime on your block, and I can tell you that when people move to the University of Chicago and they think about where they should live, they look at those crime maps and they think very, very hard about where they are willing to live with their families. And I can also tell you that when you go to the Chicago Tribune website and you look at the house prices, you can very clearly see a gradient between home price and safety.

Now, there are two challenges, but first, the nice thing about that approach: for most of us, the home that we buy is far and away the most important expenditure, the most important market transaction, aside from choosing a job, that we will ever make, and we care about our families, most of us. So you have very strong incentives to get that thing right and to think very hard about what you're really willing to bear in terms of your crime risk.

This means using housing market data, relating variation in house prices to variation in crime risk, or labor market data: taxicab drivers have much higher risks of crime victimization than university professors, so, holding all else equal, across different jobs you can try and back out the wage premium associated with a slightly higher risk of crime.

There are two problems with this way of figuring out the intangible cost of crime. One problem is omitted variables. In Hyde Park, houses that are more towards the lake on the eastern side have much lower crime victimization risk than houses that are on the west side near Cottage Grove, near the Washington Park neighborhood, where the homicide rate is like 20 or 30 times what you see in Hyde Park. The closer you get to the boundary between Hyde Park and Washington Park, the higher your crime risk.

The problem is that lots of other things are very different for the housing stock in the east part of Hyde Park as well. We're closer to the lake. We're closer to the bus lines and the train. The housing stock is fundamentally different. We're closer to the Lab School at the University of Chicago, and so on and so on. So, when researchers run this sort of regression, the house price data that you can get never captures all of the relevant amenities that co-vary with your crime victimization risk. Right? And so, when we run that regression, we are at high risk of confounding variation in crime risk across housing units with variation in other home characteristics or neighborhood amenities.
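[Editor's illustration: a minimal Python simulation of the omitted-amenity problem in this kind of hedonic regression: crime risk is correlated with an unpriced amenity, so omitting the amenity overstates what buyers pay for safety; all data are simulated.]

```python
import numpy as np

# Sketch of the omitted-amenity problem: crime risk is correlated with
# distance to the lake, so omitting that amenity makes the crime
# coefficient look too negative. All data are simulated.
rng = np.random.default_rng(5)
n = 50_000
lake_distance = rng.uniform(0, 1, size=n)   # amenity missing from the data
crime_risk = 0.2 + 0.6 * lake_distance + 0.2 * rng.random(n)
log_price = 13 - 0.5 * crime_risk - 0.8 * lake_distance + rng.normal(0, 0.1, n)

def coef_on_crime(X):
    b, *_ = np.linalg.lstsq(X, log_price, rcond=None)
    return b[1]

ones = np.ones(n)
naive = coef_on_crime(np.column_stack([ones, crime_risk]))
full = coef_on_crime(np.column_stack([ones, crime_risk, lake_distance]))
print("crime coefficient omitting the amenity:", naive.round(3))  # too negative
print("crime coefficient controlling for it:  ", full.round(3))   # near -0.5
```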

And here again, the second, more fundamental problem is that the housing market data, like the jury award data, answer the wrong question. Think about the question that I am answering when I decide whether I want to live in the eastern part of Hyde Park or the western part of Hyde Park. The question that I am answering is how much more am I willing to pay for a house to reduce the risk that I and my wife and my seven-year-old daughter and my sweet rescue mutt Trixie are victims of crime. People do terrible things to dogs in Hyde Park, too, sadly, so I was even thinking about Trixie when I was buying my house.

Now, think about the question at stake when Mayor Emanuel is deciding how much money to put in the crime budget. When Mayor Emanuel puts more money into the crime budget, that changes my risk of crime victimization, but it also changes the risk of crime victimization for everybody else in the City of Chicago. So the question of what I would pay for a 1 percent decline in my own crime victimization risk is a different question, generating a different answer, from the question of what I would pay for a 1 percent decline in crime risk in the whole City of Chicago, because not only do I experience a 1 percent decline in crime risk, but every elementary school kid on the South Side of Chicago experiences a decline in crime risk as well. And so the housing market and the labor market data don't capture these altruistic benefits that I get from other people's reduction in crime risk. Okay?

So the final thing that I'll just say about this, there is a third way here. There is a third way to try and put a dollar value on crime risk, which is called "contingent valuation," where we construct hypothetical market scenarios as part of the surveys and directly ask people what would you be — Mayor Emanuel has just been elected in Chicago. He is going to use a lot of four-letter words, and he is trying to think about how much money to put into the crime budget. What would you be willing to pay for X percent reduction in crime in the City of Chicago next year?

The nice thing about contingent valuation is that it is conceptually aiming at the right target. That is exactly the question that we want the answer to, to guide crime policy. The challenge, right, the challenge with contingent valuation is that it's basically house money. It's hypothetical behavior. So, if someone calls you up and asks what you would be willing to pay for a 30 percent reduction in crime in Chicago next year, you've probably thought about what you're willing to pay for your family, but you haven't necessarily thought clearly about your altruistic benefits, what you'd pay to help other people in Chicago. And who cares what answer you give to the survey? You're probably not answering the phone anyway, and if you're the 1 in 100 people who will answer the phone and participate in the social science survey, so you're already one of those outliers, right, there's no incentive for you to give us an honest answer. So it's hypothetical behavior, but it is aiming at the right target.

So it seems to me that one of the key priority areas for crime research and making advances in crime policy is to learn more about the degree to which contingent valuation actually works.

Now, environmental economists have been doing contingent valuation for decades and have thought very hard about this, and there is a little subliterature in environmental economics that tries to compare hypothetical responses on contingent valuation surveys to real behavior. They are very clever, and they ask people hypothetical questions that map onto behavioral decisions that you can see for people.

In principle, we can do that in the crime area as well. I could call John Laub and say, "What are you willing to pay, in terms of a higher or lower house price, for a lower or higher crime risk?" That itself is not exactly the crime policy question we want to answer, but I can see how he answers that question in a hypothetical way, and then I can look to see what he paid for his house in the D.C. area. And I can start this exercise of comparing real behavior with hypothetical behavior to try and understand whether this is, in principle, a procedure that could be really useful. This seems fundamentally important.

Laub: Thanks so much, Roseanna and Jens. When I met with Jens and Roseanna two months ago, we talked extensively about this Crime Lab idea, and they actually told me about the forensic science mail that they get all the time.

So we'd like to open up the floor for questions.

Katrina Baum: Hi. I'm Katrina Baum from NIJ, and I have a question about one of the slides earlier in your presentation, on the firearms. And I confess I probably did read the paper back in 2000, and I just don't remember the actual meat of it, but when you were talking about the firearms study, you said that you thought probably about 5 to 10 percent of the cost of crime for firearms, gun violence, was intangible. Is that what you said?

Ander: Other way.

Baum: The other way.

Ludwig: I would say 90 percent-plus. I would say 90 percent-plus.

Baum: Is intangible?

Ludwig: Is intangible.

Baum: Okay. Then I just misheard, so you don't have to answer my question.

[Laughter.]

Ludwig: The other thing that Roseanna just whispered to me is the More Guns, Less Crime book that I referred to was by John Lott, L-o-t-t.

Baum: Right.

Ludwig: Not John L-a-u-b.

Baum: Right.

Ludwig: Just to clarify that.

Baum: Okay. All right.

Laub: Thank you.

[Laughter.]

Laub: If any of you know Frank Zimring, I had to call Frank once, and I said, "Hi, Frank. This is John Laub." He says, "John Lott? What the hell are you calling me for?"

[Laughter.]

Laub: Yes.

Mitch Downey: Hi. Mitch Downey, Urban Institute. I have a question about cost benefit theory versus practice, and cost benefit theory says that, one, you should include all social costs including costs that are not directly monetary, and these are the intangible costs you're talking about. And cost benefit theory also says that you want to value everything at marginal cost and not average cost, but I think in practice, we usually don't know marginal cost, and so we approximate them with average cost.

And I'm just curious. To the extent that you are presenting your results to policy-makers and actually trying to influence policy, which is your goal at the Chicago Crime Lab, do you think that your audience understands the difference and understands that the benefits which you project are averages and will not necessarily be realized and don't exactly apply to their budgets, or do you think there is some disconnect in the way they are thinking about it?

Ludwig: That's a great question. Let me give you a three-part response, in case that's helpful.

So one of the other things that your question sparks, which I should have said during my presentation, is one of the other benefits of benefit cost analysis. There are two types of things that crime researchers sometimes do. There is cost-effectiveness analysis, which asks, for a given dollar, how much crime reduction can I get from Intervention A versus Intervention B, and then there is benefit cost analysis, which puts dollar values on the benefits and looks at the benefit cost ratio for interventions.

The nice thing about cost-effectiveness analysis is that it doesn't force you into the difficult monetization problem: you've got the costs in dollar terms, and you don't have to monetize the crime impact. The problem with cost-effectiveness analysis is that within the menu of things that we have to control crime, there are interventions, like putting more cops on the street, where we think the main impact is going to be reducing crime, but then there is also something like getting kids to go to school and graduate from high school, where reduced crime is one of the important outcomes but not the only outcome. And the high school graduation, which is sort of a secondary benefit if your main focus is on crime, is still worth a lot to society. So the nice thing about benefit cost analysis is it gives you a more comprehensive picture of what's on the benefit side of the ledger, which I think is related to this idea of tangible versus intangible and making sure that we are casting an appropriately wide net when we do these sorts of analyses.

On the marginal versus average cost question, I am happy to be corrected if I am thinking about this the wrong way, but the way that I usually think about this is that most of the stuff we get on the cost side comes from the market, from market prices. What is the dollar price of a bottle of water? It's what I pay at, I guess we're in D.C., so I'll say Safeway. And economists usually think about the market price as being the cost to society on the margin.

So, on the cost side of the program, we usually think, for most things, that we are getting something that's close to a marginal cost. On the benefit side, it's harder. The housing market data in principle are also giving us market prices, prices on the margin, if that makes sense. The problem with that is the two problems that I mentioned before.

When we do contingent valuation surveys, I think one practical implication of your question is that we do not want to ask people what they would be willing to pay to get rid of crime entirely. Right? Because we don't have an intervention that does that. Well, we do know how to do that: we could preemptively incarcerate everyone in the United States. Right? That would get us to a zero crime rate. Short of that, we don't know how to do that, and that's not what we're going to actually do.

The miracle cure that we have right now for crime might at best reduce crime by like 5 or 10 percent, and so, when we do these contingent valuation surveys, what we want to do is we want to be asking people what would you be willing to pay for a 5 or 10 percent reduction in crime rather than a hundred percent reduction in crime. So I think that is how the practice of benefit cost analysis can be sensitive to this issue of the distinction between marginal cost and average costs.

Laub: Follow up to that? Please.

Downey: Yeah. I just want to clarify. So one of the benefits that I had in mind, one of the key benefits of most crime reduction programs is reduced incarceration, and so, typically, the way that one would value that is you've prevented 10 people from being rearrested. One of them probably would have gone to prison for six months. The cost of incarcerating someone for a year is $40,000, so you saved $20,000. But if you just take the average cost of putting someone in prison, I mean, a budget is not impacted by a single reduction in incarcerations, and I'm just curious how we can sort of improve on those methods and get towards a more accurate answer for the tangible benefits that criminal justice agencies might realize.

Ludwig: Yeah. I think your question is great at highlighting why benefit cost analysis in the crime area is such an important and rich mine for future research.

I would argue that criminal justice agencies would be misguided if what they focused on was the narrow tangible benefit of less spending on prisons. And I agree a hundred percent with your concern about the distinction between marginal and average cost: for the marginal prisoner, the cost is like the baloney sandwiches that they give them three times a day and the blanket and the pillow or whatever it is. Right? And the average cost is very, very different, obviously.

So I agree completely, but let's bracket that for the moment. It is plausible, it is plausible, that the intangible costs of incarceration could be large relative to the direct government budget cost of incarceration. Consider the survey data on the prevalence of prison rape: I might be willing to punish someone with six months in jail, that is, deprive someone of their liberty for six months, in exchange for stealing my car. Whether I would be willing to deprive someone of liberty for six months and expose them to a 15 percent risk of being raped in prison is a fundamentally different question. And there are impacts of incarceration on the community, and so on.

So I think an agency would be conceptually mistaken to focus just on the government costs. But in practice, what that agency needs to know is what marginal declines in incarceration are worth to the public: what is the public willing to pay to have the same amount of crime control but fewer people in prison? What is it worth to the public not to impose these intangible social costs on communities, and to reduce crime through social policy interventions instead of incarceration? Right?

So I would say one solution is this: if you're thinking about an intervention that is going to reduce crime by 10 percent, then, depending on the jurisdiction you're talking about, there is going to be some lumpiness, because the budget savings only materialize when you can close a wing of a prison and realize some of the capital costs. But I think that point leads into the bigger point, which is that we want to be thinking about contingent valuation and whether it can get us something useful on these other sorts of questions, like what the intangible costs of incarceration are above and beyond the intangible costs of crime. I don't know if that helps.

Laub: Other questions? Please.

Nicholas O'Brien: Hi. My name is Nicholas O'Brien from the New York City Office of the Criminal Justice Coordinator. My question is about the perception of crime and the interventions and programs that are meant to reduce it. You could have a drop in crime with no similar drop in the perception of crime, or vice versa. How do you go about monetizing some of those effects and bringing them into cost benefit analysis?

Ludwig: So is the question motivated by the concern that perceptions of crime are weakly correlated with the actual crime rate? Let me give you a partial answer, and this might help.

Wait. No, don't sit down yet because I'm sure I'm going to get it wrong.

[Laughter.]

Ludwig: Let me give you a partial answer that will probably not be perfectly responsive, but it will highlight where my thinking is, and then you can clarify what I didn't answer.

So the way that I think about this is that if you ask people what their risk of crime victimization is at a point in time, their perceived level of victimization risk might be different from their actual level. But I think at least as important for crime policy analysis is whether they perceive changes. Even if they get the levels wrong, if they perceive the changes in the right direction, then I think we're still okay for the purposes of benefit cost analysis and crime policy evaluation.

And one small data point suggesting that people might not get the changes exactly right, but that their perceptions are at least correlated with changes in the real thing, is this: my University of Chicago colleague Steve Levitt and Julie Cullen, an economist at the University of California, San Diego, wrote a paper in 1999 looking at what crime does to city populations. They find that every additional Part I index offense that happens in a city reduces the city's population by one, holding all other determinants of city population flows constant. Interestingly, they also find that every homicide reduces the city population by 70, which is consistent with my own view from living in Chicago that it's really lethal violence that generates the largest social harm. And I think their research design is pretty convincing, so if you believe it, that's one data point saying that even if we get the levels of risk wrong, when the actual risk changes, the change in perception is correlated with the change in the actual level.

Is that at all —

O'Brien: That speaks to it. It was largely motivated by the value of PR: how much is it worth to sell crime drops and crime effects to the public when they are happening? Is there a value behind that, or is it better just to actually try to reduce crime?

Ludwig: So, as an economist, I think about everything this way. You know, my wife asks me if I want to walk the dog, and I do a quick benefit cost analysis in my head.

[Laughter.]

Ludwig: So the way that I would think about the answer to your question is that I would do a benefit cost analysis of informing the public and a benefit cost analysis of actual crime reduction. And if the intervention that you have for informing the public is misleading the public, I would put a very high cost on that, but I am sure that is not what you have in mind. Right?

My sense is that the resource costs to the government of better informing the public are very low. Right? And so, in some sense, the two need not be rivals; I would say there's no intrinsic crowd-out between them.

Mayor Emanuel's crime guy can spend a half hour every month meeting with the main crime reporters for the Chicago Tribune and the Sun-Times, making sure that they have an accurate understanding of what the monthly crime data looked like from the Chicago Police Department, and that would not have any serious crowd-out effect on all the other things that the city is going to try and do to actually reduce crime.

I think it would be a terrible idea for a city to start — I mean, you can sort of imagine. Anything that went beyond that would — well, let me just stop there. Okay.

Attendee 01: I had a question going back to the randomistas versus pluralists, and I guess I'm not quite clear where you fall. I think you said that you fall in between, but if I understood your discussion, you sounded pretty firmly in favor of randomization, either intentionally or through the use of social experiments. So where do you fall on that continuum? It sounds pretty close to randomista. Am I getting that wrong?

Ludwig: Yeah. Here's the way that I think about the continuum of research designs. Someone else invented the term "randomistas," and I'm inventing the term "research pluralists," so I'll tell you how I'm thinking about it.

On one end of the continuum, there is the randomized clinical trial. On the other end, there is the "regression-adjust and pray" observational study. And in the middle, there are natural experiment studies: difference-in-differences designs, especially ones where you pay attention to whether the treatment and control groups have similar pre-trends; regression discontinuity studies; and natural experiment studies where you use randomization, like the lotteries I discussed or the random assignment of cases to decision-makers who have discretion. I put all of those natural experiment studies in the middle; the sketch below illustrates the difference-in-differences idea.
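
Here is a minimal difference-in-differences sketch on simulated data, including the pre-trend check; all of the numbers, including the "true" treatment effect, are invented:

```python
# Difference-in-differences sketch on simulated city crime rates, with a
# pre-trend check. Data and effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2008)
policy_year = 2004

# Both groups share a common downward trend; the treated city gets an
# extra drop after the policy takes effect.
common_trend = -2.0 * (years - years[0])
treated = 100 + common_trend + rng.normal(0, 1, len(years))
control = 90 + common_trend + rng.normal(0, 1, len(years))
treated[years >= policy_year] -= 8  # true treatment effect

pre, post = years < policy_year, years >= policy_year

# Pre-trend check: the two groups' pre-period slopes should be similar.
slope_t = np.polyfit(years[pre], treated[pre], 1)[0]
slope_c = np.polyfit(years[pre], control[pre], 1)[0]
print(f"pre-trend slopes: treated {slope_t:.2f}, control {slope_c:.2f}")

# Difference-in-differences estimate of the policy effect.
did = (treated[post].mean() - treated[pre].mean()) \
    - (control[post].mean() - control[pre].mean())
print(f"DiD estimate: {did:.1f} (true effect: -8)")
```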

I interpret the randomista perspective, as it is commonly characterized, as trusting only randomized controlled trials. The extreme version of this is RCTs or bust: anything else is just not to be trusted. And since I'm defining the term "research pluralists," the view of research pluralism that I have in mind, and that I wouldn't agree with, is that everything on that continuum is equally good. That's like saying all children are beautiful to someone.

The "regression-adjust and pray" end point of the continuum makes me deeply suspicious, makes me deeply nervous in part because of the hormone replacement trial case and in part because for the last 15 years, I've been working on a HUD randomized experiment called "Moving to Opportunity" that takes families from some of the worst housing projects in the country, places like the Robert Taylor Homes on the South Side of Chicago, and via random lottery gives some families but not others the chance to move to much less distressed communities.

And I have taken the MTO data and analyzed it using the randomized clinical trial design, and then I have also reanalyzed the data nonexperimentally, pretending that we didn't have an experiment, to see whether "regression-adjust and pray" or propensity score matching would give us the right answer. And yes, I am going to put propensity score matching on that end of the continuum as well.
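
The flavor of that exercise can be reproduced on simulated data. In this sketch, selection into treatment depends on an unobserved trait, so the lottery contrast recovers the true effect while regression adjustment on the observed covariate does not; everything here is invented for illustration:

```python
# Experimental vs. "regression-adjust and pray" on simulated data.
# Selection into "moving" depends partly on an unobserved trait, so
# regression adjustment on observables is biased while the lottery
# contrast recovers the truth. All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)     # observed covariate
u = rng.normal(size=n)     # unobserved trait driving selection
true_effect = -5.0         # true effect of treatment on the outcome

# Experimental world: treatment assigned by lottery.
z = rng.integers(0, 2, n)
y_exp = 50 + 2 * x + 3 * u + true_effect * z + rng.normal(size=n)
print("experimental estimate:", y_exp[z == 1].mean() - y_exp[z == 0].mean())

# Observational world: people with high u are more likely to be treated.
d = (u + rng.normal(size=n) > 0).astype(int)
y_obs = 50 + 2 * x + 3 * u + true_effect * d + rng.normal(size=n)

# "Regression-adjust and pray": control for x only, since u is unobserved.
X = np.column_stack([np.ones(n), d, x])
beta = np.linalg.lstsq(X, y_obs, rcond=None)[0]
print("regression-adjusted estimate:", beta[1], "(true effect: -5)")
```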

And the comparison of the "regression-adjust and pray" answer with the experimental answer is scarring. I am still in therapy to recover from that experience. The first time you look at your regression output and see how far off the regression can be, even with lots of covariates or a well-done propensity score matching or inverse propensity score weighting method, and how often it gives you the wrong answer, it really makes you think. I wouldn't be on the end of the spectrum that says experiments only, but I am deeply nervous about "regression-adjust and pray." And I think the natural experiment work in the middle is a third way that the two polar ends of this debate need to take more seriously.

And I think at some point there is going to be a paper in criminology that scars everyone else in criminology the way that I have been scarred looking at the Moving to Opportunity data. Someone is going to write that paper; maybe this is what Roseanna and I will do on the plane back to Chicago. Someone is going to take experimental data in criminology, on a topic that every criminologist cares about and has intuition about, run this sort of experimental/nonexperimental comparison, and show people how far off even our best "regression-adjust and pray" estimate is. And that is going to be the same scarring experience that many people in economics have had, the one that has led us to focus more on design-based estimates.

Laub: Jens, like most of our good ideas, Dick Berk wrote that paper.

Ludwig: Oh, he wrote that paper.

Laub: And he said the bronze standard is the gold standard.

So Roseanna wants to jump —

Ludwig: Yeah, yeah.

Ander: I was just going to say that when we talk to policy-makers, their resistance to doing really rigorous evaluations is always "it's going to be too hard, we can't do it," and once you talk to them about the different ways that it might be done, it's really amazing how feasible this is in many, many circumstances.

So, two examples: the State of Illinois and Cook County each applied for funding under Second Chance and got it, and they were going to deliver a particular program that seemed really promising, in both cases, I think, to juveniles. And they were going to have nowhere near enough resources to serve every eligible kid; they were just going to sort of arbitrarily pick which kids would get it, and they asked us if we would help do a pro bono evaluation.

And in talking to them, it became really clear that this was a very easy way to do a randomized controlled trial, one that would be so much more valuable in what we could learn than if they had delivered the services arbitrarily, almost randomly without knowing it. And I think there are so many examples like this: when you start to talk to agencies, they almost never have enough resources for every kid, family, et cetera, that is going to be eligible. So, from my perspective, this is partly just educating people about how to do this in a way that doesn't make them do contortions to fit the research design.

Ludwig: Or another way to say that is every pilot program in principle could be done either as a regression discontinuity study or as a randomized trial.
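
When a program is oversubscribed, the randomized trial can be as simple as running the admission lottery in code. Here is a minimal sketch; the applicant IDs and slot counts are made up:

```python
# Minimal sketch of turning an oversubscribed program into an RCT: with
# fewer slots than eligible kids, assigning slots by lottery produces a
# randomized control group for free. Inputs are hypothetical.
import random

def lottery_assignment(eligible_ids, n_slots, seed=42):
    """Randomly assign n_slots of the eligible applicants to treatment;
    everyone else becomes the control group."""
    rng = random.Random(seed)   # fixed seed so the lottery is auditable
    shuffled = eligible_ids[:]
    rng.shuffle(shuffled)
    return shuffled[:n_slots], shuffled[n_slots:]  # (treatment, control)

eligible = [f"applicant_{i}" for i in range(500)]
treatment, control = lottery_assignment(eligible, n_slots=150)
print(len(treatment), len(control))  # 150 350
```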

Laub: Maybe I could take the last question.

Ludwig: Yep.

Laub: It's for Roseanna. Since Jens was willing to go on tape making a forecast about the future of criminology, God save us, being taken over by economists: how many years will it take for you to come back to the National Institute of Justice and put up a slide comparable to the one you showed for children's education, with deworming as the most cost-effective option, but for criminology?

Ander: Well, you know, I think we're actually moving a good way down the road in identifying some really promising programs. I guess the good news and the bad news is that this is sort of uncharted territory, and there's lots and lots of opportunity. So, if we want to look at things like the best way to improve schooling outcomes for kids, because we see that as a crime reduction strategy, there are a lot of promising candidates. So I would say, what, five years until we could put up something comparable? Maybe not having evaluated everything, but we would really have some actionable things that people can do, a menu of different options for ways to at least reduce youth violence.

Ludwig: Can I add two very quick friendly amendments? So five years is exactly the number that I was talking about. So, hopefully, hopefully, John, you will still be NIJ director in the second Obama administration. You can invite us back in five years, and we'll share that slide.

Laub: We're going to sign it.

Ludwig: Sign, sign the contract.

[Laughter.]

Ludwig: But the second point that I wanted to make is more fundamental — or maybe in the Palin administration, they'll invite us back to share the slide.

[Laughter.]

Ludwig: But the second point is more fundamental, which is that you're never done. So we showed you the MIT Poverty Lab slide showing that deworming had a cost of $3.25 per extra kid-year of schooling. Roseanna, before we came into NIJ today, told me that the MIT Poverty Lab has just put out another study finding that in the Dominican Republic you can increase kids' schooling at something like 50 cents per extra kid-year of school just by giving kids more accurate information about the returns to schooling, so zero is the lower bound. And I think the hope is that the more studies we do, the more we converge toward the true minimum cost of achieving the goals that we want. So, in five years, we'll come back, but we're not going to be done.
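
The ranking on that kind of slide is just a cost-effectiveness ratio: total program cost divided by the extra kid-years of schooling it generates. This sketch reproduces the arithmetic, with hypothetical cost and schooling inputs chosen only to match the quoted ratios:

```python
# Cost-effectiveness ratios of the kind on the Poverty Lab slide: program
# cost divided by extra kid-years of schooling produced. The cost and
# kid-year inputs are hypothetical, chosen to match the quoted ratios.
def cost_per_extra_year(total_cost, extra_kid_years):
    return total_cost / extra_kid_years

programs = {
    "deworming (hypothetical inputs)": cost_per_extra_year(32_500, 10_000),
    "returns-to-schooling info (hypothetical inputs)": cost_per_extra_year(5_000, 10_000),
}
for name, ratio in programs.items():
    print(f"{name}: ${ratio:.2f} per extra kid-year")  # $3.25 and $0.50
```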

Laub: See you in 2016.

So please join me in thanking our guests.

[Applause.]

