Partnering with J-PAL North America: A Practitioner Perspective

Photo: © visitBerlin, Foto: Dirk Mathesius (Flickr Creative Commons)

Benefits Data Trust (BDT), a nonprofit partner, is collaborating with J-PAL North America to identify effective outreach strategies to enroll low-income households in benefits programs. We asked Rachel Cahill, Director of Policy at BDT, a few questions about her experience partnering with J-PAL North America to design an evaluation that will answer important questions about BDT’s work.

What made you decide to partner with J-PAL North America?

I think there is a real benefit to the partnership. We were already doing this work and had completed another evaluation, though not an RCT, and we were confident that our program worked to help low-income households apply for benefits. Really, we were seeking to answer a related question: is the intervention that we already believe is working on SNAP take-up also having an impact on health outcomes?

We got connected through the Camden Coalition and began working with Amy [Finkelstein] and Matt [Notowidigdo], discussing what we already knew, to eventually arrive at the first-order research question in this evaluation: the effect of our program on SNAP take-up.

We didn’t begin with this particular research question. It was sort of a negotiation: Matt and Amy found that there really isn’t a lot of evidence on the effect of outreach and application assistance on benefits enrollment. We bought into that approach.

It was much more of a partnership to decide on the research question.

Can you discuss one or two examples of challenges in delivering your intervention in context of a randomized evaluation and how you worked with J-PAL to overcome them?

There were various challenges. One of the nuances of the program is that we are using data that belongs to the Pennsylvania Department of Human Services (DHS). We had to broker a three-way agreement between BDT, MIT, and DHS, which was a very large barrier to overcome.

The Data Use Agreement in place at the time allowed us to use DHS data for outreach but not for research. Ultimately we did overcome that barrier, but it did delay the launch of the evaluation.

Doing an RCT with an entitlement program, we had to develop a design that was really just a waitlist control. It would be much more straightforward to get at BDT’s core question of identifying the effect of the SNAP program by randomly assigning some people to receive SNAP and others not to receive it. However, eligible people cannot be denied the program, because it is an entitlement.

Instead, we had to think creatively to develop high-intensity, low-intensity, and control groups in an encouragement design. We had to negotiate with MIT because we were seeking simplicity in the design. Initially, J-PAL proposed a dozen different types of outreach letters to explore the effect of many different variations on the outreach strategy. BDT explained that we can manage a lot of nuance, but beyond a certain number of treatment arms our ability to manage them would decline.

There was a tradeoff: Do you keep the design simpler or do you have a dozen treatment arms and increase the probability that you make a mistake?
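To make the constraint concrete: in an encouragement design, every household remains free to enroll in SNAP, and only the intensity of outreach is randomized. Below is a minimal sketch of what a three-arm assignment could look like in Python; the arm labels, the equal split, and the household identifiers are illustrative assumptions, not the actual study protocol.

```python
# Minimal sketch of a three-arm encouragement-design assignment.
# Arm labels and the equal one-third split are illustrative assumptions,
# not the BDT/J-PAL study's actual protocol.
import numpy as np

rng = np.random.default_rng(seed=2016)  # fixed seed makes the assignment reproducible


def assign_arms(household_ids):
    """Randomly assign each household to control, low-, or high-intensity outreach."""
    arms = ["control", "low_intensity", "high_intensity"]
    # Repeat the labels to cover the full sample, then shuffle in place.
    assignments = np.resize(arms, len(household_ids))
    rng.shuffle(assignments)
    return dict(zip(household_ids, assignments))


if __name__ == "__main__":
    sample = [f"household_{i:04d}" for i in range(9)]
    for household, arm in assign_arms(sample).items():
        print(household, arm)
```

Note that even control-group households can still enroll in SNAP on their own; the design estimates the effect of outreach intensity, not of the benefit itself, which is exactly the constraint Cahill describes.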

What lessons or insights about your program have emerged from this partnership and this evaluation?

We have learned a lot about doing research. I joked with Amy that I estimated I would spend 20 percent of my time on this project. She asked how I was going to manage spending 20 percent of my time on it, and I told her that it takes 20 percent of my time just to answer her questions!

To do it right, which J-PAL does, takes a lot of thought and planning. It really takes multiple people with different types of expertise. This requires a big resource commitment from the organization.

We also have to use some political capital with our state partners. There’s an opportunity cost there: you have a limited number of requests that you can make in a given time. Looking back on things now, I still would have done the evaluation, but I would have allocated more time and more resources.

For example, we thought BDT would be involved for the first 12 months, but we didn’t budget time for the data analysis because we figured that would be MIT’s role. We realized that the engagement doesn’t end when the 12 months end, and that we still have to follow up on things like data collection.

This experience will just make us smarter in doing future research.

There’s been tremendous value in terms of learning on our program.

We very quickly saw that the marketing letter formatted in a particular way for the evaluation generated an increased response rate of half a percentage point, which may seem small but is a lot in our field of work. This was significant enough that we didn’t even wait until the end of the RCT; we immediately thought about how to incorporate this letter design in states besides Pennsylvania.

The discipline of setting up an RCT was also helpful.

How will your organization use the knowledge generated by the evaluation?

Our hypothesis is that the high-touch intervention will increase take-up more than the light-touch intervention, and certainly more than the control group.

A common misperception in this field of social services is that just sending out a letter will be enough to increase take-up. This is founded on the premise that low take-up is just an issue of awareness, but we know that the enrollment process takes a lot more than awareness raising. Many of the people we talk to know that SNAP exists—they just can’t imagine going through all the paperwork and enrollment procedures to access the benefits they are eligible for. This is especially the case for the SNAP program, which has one of the more archaic enrollment processes.

The real game changer is whether we can demonstrate long-term outcomes on health. Amy and Matt believe that we are not sufficiently powered to detect these effects in this particular evaluation.
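To illustrate what being underpowered means here, the sketch below runs a standard two-proportion power calculation with statsmodels. Every number in it (baseline rate, effect size, sample size) is a hypothetical chosen for illustration, not a parameter of the actual study.

```python
# Illustrative power calculation for detecting a small difference in a
# binary health outcome between two arms. All numbers are hypothetical;
# they are not the BDT/J-PAL study's actual parameters.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.20  # assumed control-group rate of the health outcome
treated_rate = 0.21   # assumed treated rate: a 1-percentage-point improvement

# Convert the two proportions into Cohen's h, the effect-size measure
# used by the normal-approximation power test.
effect_size = proportion_effectsize(treated_rate, baseline_rate)

# Power achieved with 5,000 subjects per arm at a 5 percent significance level.
power = NormalIndPower().power(effect_size=effect_size, nobs1=5000,
                               alpha=0.05, ratio=1.0)
print(f"Power to detect a 1 pp effect with 5,000 per arm: {power:.2f}")
```

With these assumed numbers the calculation returns power of roughly 0.24, far below the conventional 0.8 target, which is why detecting modest health effects demands much larger samples than detecting effects on take-up.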

People are really thinking about investments in social services as a way to decrease healthcare costs. Not to put it too bluntly, but that’s where the money is: healthcare is an area where the government is spending so much money. We think this is a really significant opportunity to generate definitive evidence on whether social services can prevent future health costs, rather than just having a hypothesis that social services might be helpful.

That’s really what we’re striving for, and we’re willing to go down the rabbit hole to figure that out.

I really like working with the MIT team, but I would explain to any other nonprofit considering a randomized evaluation what a big deal an RCT is. It’s still a big challenge to get a nonprofit to go down that rabbit hole to answer really tough questions. With a big emphasis on rapid testing, people often don’t want to wait several years to see longer-term outcomes. Nonprofit staff often move on within three years, so it can be hard to even have the same people working for the duration of the evaluation.

I don’t say this as a critique but just as advice to J-PAL folks thinking about partners other than government: build a level of transparency about the level of commitment that is required going into an evaluation. I was not aware of how much this would entail when BDT signed on.

Read partner testimonials from BDT and from the South Carolina Department of Health and Human Services here.

Innovative Health Care Delivery: An interview with Dr. Monteic A. Sizer

Located in northeastern Louisiana, the Northeast Delta Human Services Authority (NEDHSA) provides integrated care services for people with mental illness, substance use disorders and developmental disabilities. We selected NEDHSA as a partner through our inaugural Health Care Delivery Innovation Competition. In the interview below, Dr. Monteic A. Sizer, NEDHSA’s Executive Director, shares his take on innovation in health care and what it’s like to partner with J-PAL North America.

Dr. Monteic A. Sizer
Photo credit: NEDHSA

Tell us about what makes your approach to health care delivery innovative. 

Northeast Delta Human Services Authority (NE Delta HSA) works to transform and positively connect individuals, communities, and social systems through an innovative approach to health care delivery. This approach is focused on effecting positive change in the behavioral health care services traditionally offered by a governmental agency. We catalyze solutions for one of our nation’s most underserved regions. Our deep engagement, through trusted relationships with diverse members of the communities we serve, is itself a significant innovation and helps provide a meaningful connection toward a more fulfilling, connected life for our citizens. Additionally, the partnerships we have built offer greater access to an integrated treatment network from wherever a person enters the health care system, whether their journey begins through our behavioral health clinics or through passionate primary care partners to whom we can refer citizens for problems beyond traditional behavioral health issues.

Many of our mental health and substance abuse clients have never visited a dentist or a primary care physician. We are equally interested in a client’s body and spirit as in their mental health. Our integrated, collaborative model provides greater accountability and improved responsiveness, which differentiates us from other governmental entities.

What do you see as the biggest challenge to health care delivery in your area? 

The biggest challenges to health care delivery include longstanding poverty, lack of education, a lack of employment opportunities, and high crime within our region. Barriers like basic transportation, too few medical providers, and a lack of inpatient and long-term psychiatric care amplify these challenges. We are also working to improve engagement among government agencies that have traditionally worked in silos, another barrier that prevents efficient use of resources. We are dedicated to improving our citizens’ access to behavioral and primary health care and to ensuring that competent, quality care with excellent customer service is available to them.

How does your work help break the cycle of poverty? 

In addition to our work to improve citizens’ access to quality behavioral and primary health care, we are actively working to address and catalyze improvements in our region’s social determinants of health. We are passionate about positively influencing quality-of-life factors such as affordable housing, care for people in crisis who have a mental health or substance use issue, improved employment opportunities, a better quality of education, and faith-based mental health outreach. We believe that addressing these social determinants of health will help break the cycle of poverty across longstanding issues of race, class, segregation, religion, and stigma. We work to catalyze collaboration and efficiency among governmental agencies, which will also be an important factor in reducing poverty and the disjointed connections among the services that our regional agencies provide.

Why are you interested in randomized evaluations?

NE Delta HSA understands that a randomized evaluation is an unbiased, objective way to measure programmatic results. We hope to use this method to measure how well our services are meeting our citizens’ needs and to find further improvements for even better health outcomes in our region and our state.

Why did you decide to partner with J-PAL? 

We place great value on collaboration, and we believe that knowledge-sharing will help build a stronger and healthier population, improve the effectiveness of our services, help us make the best use of our existing resources, and enable replication that can improve people’s lives far beyond our geographic region. We respect that J-PAL comprises highly skilled researchers and professionals who are truly dedicated to poverty reduction supported by scientific evidence. We are thrilled to work with J-PAL and to have access to the team’s expertise.

Featured Affiliate Interview: Sara Heller

This interview was originally completed in March 2016.

Sara Heller is an Assistant Professor of Criminology at the University of Pennsylvania.

What got you interested in criminology, and particularly in cognitive behavioral therapy?
I’m not really a criminologist in the traditional sense—I started studying policy because I wanted to help figure out how to improve life outcomes for disadvantaged youth, especially those living in cities in the U.S. I actually started with education policy, because I thought that, unlike some other areas of social policy having to do with poverty or families, pretty much everyone agreed that the government had a role in providing education. But as I learned more about the challenges facing urban youth, I realized just how prevalent involvement with the criminal justice system is (one estimate suggests that 1 in 3 black men will spend time in prison during their lifetime). And I decided it makes very little sense to study problems like education in isolation. Youth are facing a series of interrelated choices about school, crime, work, and family; their choices in one domain are likely to affect everything else. So I do study crime, but it comes from a broader interest in how policy can improve a wide range of outcomes for youth.

The CBT [cognitive behavioral therapy] interest involved a lot of luck—I was a graduate student with the University of Chicago Crime Lab in its early days, and the intervention that won their first design competition was CBT-based. It ended up being a fortuitously good fit with my interests in education, psychology (my undergrad major), crime, and rigorous causal inference.

What is one current research project that you’re particularly excited about?
I’m going to cheat and talk about a set of projects. One of the often-criticized aspects of RCTs is their black-box nature; you test a bundle of things together, and you don’t know which parts matter or whether it could work in another setting. One solution is to do a series of RCTs that build on each other. And I’m doing that with my summer jobs work in Chicago and now in Philly: across multiple studies in multiple years, we’re experimentally varying different parts of the program, starting to incorporate survey work to measure mechanisms, measuring implementation heterogeneity across providers as programs grow, and taking the tests to different programs in different cities to assess external validity.

What is your “dream evaluation”? (It doesn’t have to be feasible!)
Anything with a sample size of infinity. You could vary each aspect of a program separately to do a great job of isolating mechanisms, testing heterogeneity, measuring spillovers, and all the other questions to which my answer is always “if I had the power I’d have…”

What is your most memorable story from the field?
I was talking to a group of boys in Chicago’s summer jobs program, and they started volunteering stories (long before I had any results) about ways the program might be working. One told a story about how proud he was when he told his friends that he couldn’t go out late at night because he had to get up for his job. They talked about being role models for the younger kids around them, having adult mentors who opened their eyes to a new possible future, seeing new parts of the city, earning a paycheck for the first time, and having a peer group where it was safe to talk about some of the genuinely terrible things that had happened to them, which helped them let go of their obsessive worry over those things. Overall, they were incredibly articulate about what they were learning in the program, how much they appreciated the chance the City was giving them, and how they saw their own lives changing as a result. It was a really moving reminder of why we all do the work we do.

See Sara’s bio and evaluations.

Featured Affiliate Interview: Enrique Seira

This interview was originally completed in November 2014.

Enrique Seira is an Industrial Organization and Development Economist at the Instituto Tecnológico Autónomo de México.

What got you interested in development economics?
I was born and raised in northern Mexico, and as a kid we frequently went to the United States. The difference in income across countries was striking even to a small child. From that age I always wondered why the US is rich and Mexico is poor. Many years later I realized that this disparity was present for dozens of countries in the world and that this was a centuries-old question. So for me it is very personal. My interest in development economics arose from both my desire to help spur the development of my country and an intellectual curiosity to understand why some countries and people are poor.

Development economics is an exciting field that in the last two decades has undergone large changes in terms of methodologies (like the use of randomized controlled trials), access to data, and increased interest by the international community in the topics it studies. Unfortunately, we still know too little about what works to further development, and it is still true that many policymakers show little interest in learning what works. We have to change this, and I believe J-PAL is playing a very important role. Consistent with J-PAL’s mission, and in order to disseminate knowledge and engage Mexican academics and policymakers, I helped create “Qué Funciona para el Desarrollo,” a non-profit organization that conducts and disseminates rigorous research investigating what works for development.

What is one current research project that you’re particularly excited about?
I am excited about many projects, but let me mention two of them. The first is about finding ways to motivate high school students to flourish both in academic terms and in their broader personal lives by helping them develop socio-emotional skills (SES). Teenagers tend to engage in risky behaviors and frequently need guidance and examples in developing their life plan. Many of them seem to lack aspirations and role models. I am currently helping the Mexican Ministry of Education to conduct a randomized trial to evaluate “Construye T”, a program aiming to develop SES in public high school students.

A second research agenda I am pursuing is about an old question of whether finance causes growth. Mexico has a low credit-to-GDP ratio even within Latin America (about 27 percent) and also has had meager growth in the last three decades. Are these two facts causally related? The particular question I want to answer is this: To what extent are medium-sized firms in Mexico constrained in their productivity and growth by lack of credit? What (if anything) is preventing the credit market from functioning efficiently? I am currently engaging Mexican banks to set up an empirical research strategy that helps us answer these important questions.

What is your “dream evaluation”? (It doesn’t have to be feasible!)
One of the questions I am interested in overlaps to some extent with the fields of political economics and political philosophy: what roles do government legitimacy and social trust play in engendering social cooperation and compliance with the law? Economic transactions almost always involve an element of trust between agents. Successful public policies often require cooperation by citizens. Tax evasion and corruption, for instance, may be a function of perceived government legitimacy. What determines the legitimacy of social arrangements? How is trust formed, and to what extent is economic activity influenced by it? Ideally, I would like to exogenously vary the legitimacy of government and measure how this influences compliance with the law.

What is your craziest story from the field?
Instead of describing a funny anecdote, let me share a revealing story from the field. One of the projects I am working on with coauthors is about understanding why the productivity of small farmers is so low. One of the experiments involved partnering with a government ministry. The only role of the ministry was to deliver fertilizer by March or April, which they agreed to do. It is now October and they still haven’t delivered it! This is disastrous for yields.

This is not uncommon in government fertilizer programs. Why? Many bureaucrats face few incentives to perform well. I believe this highlights the importance of accountability and the power of market competition. This may also explain why there are fewer randomized controlled trials with the government as an implementing partner. I am convinced that successful development policy has to carefully think through the incentives of politicians and bureaucrats. Pure knowledge of what works for development is necessary but not sufficient.

See Enrique’s bio and evaluations.

Featured Affiliate Interview: Amanda Pallais

This interview was originally completed in September 2014.

Amanda Pallais is an Assistant Professor in the Economics Department at Harvard University and a Faculty Research Fellow at the National Bureau of Economic Research.

What got you interested in economics and labor economics in particular?
In high school I had the chance to work closely with inner-city kids and become attuned to issues of poverty. One of the best ways to reduce poverty at the micro level is to help prepare people for employment and one aspect of that is to improve educational outcomes. Economics is a great way to help accomplish this. It provides a toolkit to identify the causes of problems and to test potential solutions.

What is one current research project that you’re particularly excited about?
I’m particularly interested in why low-income students who are prepared for college do not go to college at the same rates as do higher-income students. Can reducing the cost of college induce low-income students to attend? I’m working on a project that provides scholarships to thousands of randomly selected students. We are just starting to see the results of the scholarships and they are startling; they appear to be even more effective than we hypothesized.

What is your “dream evaluation”? (It doesn’t have to be feasible!)
I want to know how technology can improve education. We have the technology to connect students with the best teachers and professors, regardless of their location. And we can provide students with opportunities such as tutoring or advanced classes unavailable to them at their own schools. Although there is a lot of speculation in this area, we really don’t know what works. The key is learning whether and how such approaches will actually help students learn more.

What has been your craziest experience implementing a research project?
As part of my dissertation, I hired about a thousand workers from around the world for short data-entry jobs. The research focused on providing jobs for people who lacked work history to determine what effect it had on their subsequent labor-market outcomes. I was surprised by the reactions of many of those I hired. The workers were so thankful to have received their first jobs. I received hundreds of thank-yous, invitations to join social networking sites from all around the world, and pictures of pets.

See Amanda’s bio and evaluations.

Featured Affiliate Interview: Philip Oreopoulos

This interview was originally completed in February 2015.

Philip Oreopoulos is a Professor of Economics and Public Policy at the University of Toronto.


What got you interested in economics and the economics of education in particular?
In high school I thought I wanted a career in business, but then after taking an economics class I realized what I really wanted was to understand how the world of business worked, why some ended up happy and successful and others ended up homeless. Economics provides an appealing approach to understanding how best to deal with constraints (be they financial, time, or otherwise) and a wide set of tools for trying to make things better.

My training at Berkeley was in labor economics, with a focus on empirical methods for generating convincing causal inference. At that time, the field of behavioral economics was just starting to get off the ground, with research on procrastination and hyperbolic discounting, applied mostly to savings and finance. I started a project looking at the causal effects of compulsory schooling on wealth and happiness, and it dawned on me that the typical investment model of schooling could not easily explain high returns from compelling students to stay in school who otherwise would have left earlier. I think behavioral models that incorporate adolescents’ lack of maturity or ability to ‘think things through’ are much better suited to explaining dropout behavior. I’m very interested now in students’ own contribution to their schooling production function and what that production function actually looks like. We know surprisingly little about these topics, but that is starting to change.

What is one current research project that you’re particularly excited about?
I recently finished an experiment in which high school seniors at low-transition schools were guided through the college application process in class, for free. The goal was to help every Grade 12 student exit with an offer of acceptance from a college program they helped pick and a financial aid package, regardless of how certain they were about wanting to go. Many students at these schools don’t receive encouragement to go to college and must invest a great deal of time and focus if they want to complete all the application steps on their own. By incorporating the application process into the curriculum for everyone, students less sure about college can discover the variety of programs available to them. The option to go becomes much more real. The program, randomized at the school level, produced an increase in college-going of 5 percentage points among the entire Grade 12 class, an increase of about 12 students per school.

What is your “dream evaluation”? (It doesn’t have to be feasible!)
I’d like to be able to evaluate long-term effects for students at the margin of going to college. My interest, and that of others, in encouraging more youth to go to college (including possibly two-year vocational programs) is based on past research that is either outdated or not all that good. There’s room for improvement, and I’m hoping an experiment will come along with a treatment effect large enough, in encouraging youth into well-matched programs they otherwise would not have attended, to evaluate intermediate and long-term impacts on skills, finances, health, and wellbeing.

I can’t resist also suggesting that we systematically evaluate how different parenting methods affect children in the long term. There is not a lot of good evidence-based advice for parents on how they could be spending their time and money to foster children’s patience, grit, compassion, etc. What activities are best for my kids? How should I manage screen time? How should I react to misbehavior? What’s the best way to discipline it? Do the answers to these questions depend on gender or socioeconomic circumstance? It would be very difficult to implement field experiments that change parenting styles, but since you’re asking me to dream…

What has been your craziest experience implementing a research project?
I had a bad experience once with a funder that did not like the one-year results of a two-year study. They requested changes to the program design and site location, or else they would withdraw support. Funders usually partner to implement program evaluations because they think the program will work. It’s stressful when things don’t go exactly as planned, and even when they do, it’s no guarantee the program will work as expected.

See Phil Oreopoulos’ bio and evaluations.

Featured Affiliate Interview: Joseph Doyle

This interview was originally completed in May 2015.

Joseph Doyle is the Erwin H. Schell Professor of Management and a Professor of Applied Economics at the MIT Sloan School of Management.

What got you interested in economics and in particular, child welfare and health?
I have been interested in public policy since high school, when I discovered economists weighing in on policy analysis. The discipline always seemed like a natural fit to frame the questions and then use data to find credible answers. I then had fantastic advisors in my undergraduate work at Cornell (Dean Lillard and Elizabeth Peters) who nurtured my interest in child welfare. At the time, I found that some policy debates seemed caught up in political ideology, but there was common ground on child welfare: most agreed that children should be protected from abuse or neglect. In terms of health, I first became interested in studying the value of Medicaid as an anti-poverty program, and then discovered that the fundamental questions in health, such as what types of care are productive and unproductive, offered important outcomes to study (patient health) and plenty of work to do.

What is one current research project that you’re particularly excited about?
I am particularly excited about a project in Camden, New Jersey where other J-PAL affiliates and I teamed up with Dr. Jeff Brenner to study an innovative model that provides particularly extensive care management services to “super-utilizers” of the healthcare system. The management includes helping patients with social as well as medical needs. I am hopeful this is a strategy that can improve patients’ lives and lower healthcare costs at the same time.

What is your “dream evaluation”? (It doesn’t have to be feasible!)
I am really interested in the impact that family members have on one another, so an unfeasible (and unadvisable) evaluation would randomly assign siblings to one another and then unpack what types of siblings have the most positive or negative impacts on their within-family peers. A more feasible one would study how best to prevent child abuse and neglect, with, for example, different models of family preservation services that families investigated by child protective services might receive.

What is your craziest story from a research experience?
I have research stories, and crazy stories, but the Venn diagram is coming up blank on the intersection. For one project, I was one data-permission renewal away from a publication and making little progress, so I flew to Chicago and basically sat in a waiting room until the data specialist found time to meet with me. I once had a paper about a “large Midwest healthcare system” dutifully anonymized according to our data protocols, only to have the new Chief of Medicine call me the same week it was published (he was more interested than annoyed, but I wasn’t sure what he would say when I called him back). Last, I had a journal article where the proofs were thoroughly checked for completeness, down to each number in the tables, and I sent the article off satisfied with it. Three years later, a reader told me that Tables 1 and 2 were missing from the published paper. I had two main takeaways: (1) double-check papers when they come out, even if the returned proofs were complete, and (2) one person read the paper in three years! That’s fewer than the number of authors on the paper! I am hopeful that this is a lower bound on the actual number of readers and that the complete version on my website was doing its job.