This interview was originally completed in March 2016.
Sara Heller is an Assistant Professor of Criminology at the University of Pennsylvania.
What got you interested in criminology, and particularly in cognitive behavioral therapy?
I’m not really a criminologist in the traditional sense—I started studying policy because I wanted to help figure out how to improve life outcomes for disadvantaged youth, especially those living in cities in the U.S. I actually started with education policy, because I thought that, unlike some other areas of social policy having to do with poverty or families, pretty much everyone agreed that the government had a role in providing education. But as I learned more about the challenges facing urban youth, I realized just how prevalent involvement with the criminal justice system is (one estimate suggests that 1 in 3 black men will spend time in prison during their lifetime). And I decided it made very little sense to study problems like education in isolation. Youth are facing a series of interrelated choices about school, crime, work, and family; their choices in one domain are likely to affect everything else. So I do study crime, but it comes from a broader interest in how policy can improve a wide range of outcomes for youth.
The CBT [cognitive behavioral therapy] interest involved a lot of luck—I was a graduate student with the University of Chicago Crime Lab in its early days, and the intervention that won their first design competition was CBT-based. It ended up being a fortuitously good fit with my interests in education, psychology (my undergrad major), crime, and rigorous causal inference.
What is one current research project that you’re particularly excited about?
I’m going to cheat and talk about a set of projects. One of the often-criticized aspects of RCTs [randomized controlled trials] is their black-box nature: you test a bundle of things together, and you don’t know which parts matter or whether it could work in another setting. One solution is to do a series of RCTs that build on each other. And I’m doing that with my summer jobs work in Chicago and now in Philly: across multiple studies in multiple years, we’re experimentally varying different parts of the program, starting to incorporate survey work to measure mechanisms, measuring implementation heterogeneity across providers as programs grow, and taking the tests to different programs in different cities to assess external validity.
What is your “dream evaluation”? (It doesn’t have to be feasible!)
Anything with a sample size of infinity. You could vary each aspect of a program separately to do a great job of isolating mechanisms, testing heterogeneity, measuring spillovers, and all the other questions to which my answer is always “if I had the power I’d have…”
What is your most memorable story from the field?
I was talking to a group of boys in Chicago’s summer jobs program, and they started volunteering stories (long before I had any results) about ways the program might be working. One told a story about how proud he was when he told his friends that he couldn’t go out late at night because he had to get up for his job. They talked about being role models for the younger kids around them, having adult mentors who opened their eyes to a new possible future, seeing new parts of the city, earning a paycheck for the first time, and having a peer group where it was safe to talk about some of the genuinely terrible things that had happened to them, which helped them let go of their obsessive worry over those experiences. Overall, they were incredibly articulate about what they were learning in the program, how much they appreciated the chance the City was giving them, and how they saw their own lives changing as a result. It was a really moving reminder of why we all do the work we do.