Notes
By Ron Haskins
Although there are numerous signs at both the federal and state levels that evidence-based policy is influencing policymakers, a lurking danger in any movement is that it will fall short of its potential unless it continues to expand. Concern that the evidence-based policy movement is merely a flash in the pan should be at least somewhat assuaged by the breadth the movement has already achieved.
Perhaps the most far-reaching element of the movement is the six evidence-based initiatives started by the Obama administration, which are now supporting well over 1,400 programs at the local level in teen pregnancy prevention, home visiting, preschool and K-12 education, community-based programs, and employment and training. The Teen Pregnancy Prevention Initiative and the Maternal, Infant, and Early Childhood Home Visiting Program are especially notable. These two initiatives demonstrate several of the characteristics that seem certain to become classic features of evidence-based policy. Notably, both illustrate the advantages of tiered funding initiatives in which most of the federal grant funds (75 percent in both cases) go toward programs with strong evidence of producing impacts on important outcomes, while smaller grants go toward promising and innovative programs that are comparatively untested. This arrangement ensures that most funds are spent on programs that maximize the chances of producing impacts, while simultaneously providing adequate funds to develop new programs that may prove their worth in the future. The pipeline to innovation must remain open because new and more effective programs are always needed.
But the distinct approaches that are sustaining the evidence-based movement do not end with the Obama initiatives. Joining the parade are the reforms of Head Start, based in large part on observational measures of teacher performance in the classroom; the Pew-MacArthur Results First initiative, in which 19 states have agreed to review the evidence on which their social programs rest and, based on this assessment, make recommendations to their respective state legislatures about shifting funds to programs that work; the recent focus on Pay for Success programs (also called “Social Impact Bonds”); and others.
To this impressive list we can now add the recent work by the White House’s Social and Behavioral Sciences Team (SBST). Founded just last year under the leadership of wunderkind Maya Shankar, the team has just published the results of 17 studies, all but one based on random-assignment designs, aimed at influencing people’s choices or improving government efficiency. Drawing in large part on the work of Richard Thaler of the University of Chicago and Cass Sunstein of Harvard, SBST is teaching government to “nudge” people into making better decisions.
The team is using the results of behavioral research which, roughly speaking, shows the subtle ways that various messages can influence what people do. If people are reminded of obligations in a timely fashion, they are more likely to fulfill them. When filling out a form, people who sign at the beginning, attesting that they will provide honest and accurate information, tend to give more accurate answers than people who sign at the end, after they have already answered. And when employees are enrolled by default in a retirement savings account, so that they must take action to opt out rather than to opt in, enrollment rates rise and people save more for retirement.
There are several reasons the SBST behavioral science initiative is such a stellar entry into the flooding tributaries of the evidence-based movement. First, the energy of the evidence-based movement must be rekindled by new initiatives that strengthen the claim that evidence-based policy can improve program impacts. Second, the findings from all but one of the administration’s studies are based on random-assignment designs, the gold standard of program evaluation. It follows that their results are likely to be reliable and replicable. Third, all the studies have immediate application to government policy. We learn how to write letters that increase compliance, ways to increase contributions to retirement plans, how to increase government workers’ use of two-sided copying, how to increase college entry, how to increase the call-in rate for programs that need information from account holders, and many other lessons that improve individual choices and promote government efficiency. Fourth, it seems reasonable to conclude from the entirety of the SBST’s results that, if scaled up, these behavioral innovations would save the government billions of dollars. A more subtle and difficult-to-measure outcome of scaling up the SBST discoveries is their effect on health and well-being in old age. The benefits of millions of Americans saving more for retirement may be hard to quantify, but who doubts that a larger savings account gives retirees greater peace of mind and more choices in spending, not to mention the indirect benefit of making retirees less dependent on their children and other relatives?
The field of application of nudges based on behavioral science is limited only by the imagination of program designers and government officials. Equally important, the early success of applying behavioral principles to government policy illustrates yet again the soundness of basing policy choice on evidence of outcomes. Evidence-based policy is expanding its reach, this time by showing new ways to influence behavior, improve the efficiency of government programs, and save money. Evidence-based policy is on a roll.
In “Evidence at the Crossroads,” we seek to provoke discussion and debate about the state of evidence use in policy, specifically federal efforts to build and use evidence of what works. We start with the premise that research evidence can improve public policies and programs, but fulfilling that potential will require honest assessments of current initiatives, coming to terms with outsize expectations, and learning ways to improve social interventions and public systems. Read other posts in the series, and join the conversation by tweeting #EvidenceCrossroads.