Federal
agencies, including those that provide funding for children and family
initiatives, are being encouraged to find ways of using scientific evidence to measure program effectiveness and inform their budgeting decisions.
The push comes from a recent federal
Office of Management and Budget (OMB) memo that directs federal departments and
agencies to show in their fiscal 2014 budget submissions how they use evidence
and evaluation, including their most innovative ways of doing so.
The directive stops short of requiring
federal agencies to fund only programs and initiatives that evidence demonstrates
are effective. Instead, it appears to be a step toward building the capacity of
agencies to gather and evaluate evidence to improve program performance and guide
the allocation of funds at a time of mounting pressure to tighten spending.
“There is recognition here that
implementing evidence-based reforms is something the government is just learning
how to do,” said Jon Baron, president of the Washington, DC-based Coalition for
Evidence-Based Policy, a national advocacy organization. “To an extent, they are
inventing it as they go along and are encouraging federal agencies to play a
role in developing new evidence-based approaches.”
Evidence In Budgeting
Evidence-based
government is not a new concept. Federal administrations going back to the
1960s have from time to time promoted the notion of funding programs based on
evidence of their effectiveness.
There are plenty of examples of
such an approach at the program level. Among the most recent is the Obama
Administration’s use of evidence-based models to assess and fund a number of federal
programs, including home visitation and teen pregnancy prevention. The programs
are evaluated using scientific methods and the largest share of funding is awarded
to those that evidence shows produce the best outcomes.
But overall, the use of evidence to
evaluate effectiveness and guide budget allocations has yet to become part of
the DNA of governments and agencies across the country, whether federal, state
or local.
Evidence-based
budgeting is not without controversy, although the principle of supporting
programs with proven track records of success and identifying ways to make
programs more effective attracts little debate. Without such evidence, there is
a risk that scarce dollars will be spent on programs that do little to improve
the outcomes of children and families. A child in an ineffective early learning
program, for example, is less likely to be ready for school; a parent who gets
a voucher for job training in an ineffective program is less likely to exit
poverty.
“Whether we
look at it as advocates of quality social services in general or as keepers of
the public purse, to expect that the programs we do advocate for and fund can
demonstrate positive outcomes seems like common sense to me,” said Raymond
Firth, director of the University of Pittsburgh Office of Child Development
(OCD) Division of Policy Initiatives. “I wouldn’t go to a doctor if I didn’t
believe he was using practices that had evidence that they were effective.”
More
effective use of government dollars may be the most obvious of the potential
benefits, but there are others. Evaluation rigorous enough to identify aspects
of programs that work well and those that do not can lead to improvement and
innovations in the nature of future programs.
“A
secondary benefit is that it raises the status of evidence in public and agency
discourse,” said OCD Co-Director Robert McCall, Ph.D. “And that is certainly
welcome, because historically politics always trumps evidence in policy
formation.”
Evidence-based
budgeting also raises some questions about how such policies are carried out. One
issue is that not all research is equal. Some methods of evaluation yield more
credible findings than others. Some are more vulnerable to manipulation and
spin than others. And it often takes a high level of sophistication to
understand the differences.
An emphasis on outcomes also has
potential drawbacks. “An emphasis on outcomes can sometimes give short shrift to
implementation,” said McCall. “Your first outcome is whether the program was
properly implemented. Did the service providers provide the service in the
manner, nature and extent that we think should be effective? You shouldn’t be
evaluating other outcomes until you determine whether the program was properly
implemented.
“Policymakers and funders often
want you to evaluate a program in its first cohort. To the extent the program
is new and innovative, it may take two or three cohorts of people before the service
providers get the implementation down. It takes a while to learn how to do
that.”
Rigorous
evaluation can identify aspects of programs that work well, as well as those
that don’t, and offer insight into why. Such evidence tends to spur innovation,
as long as research that leads to creative new solutions is adequately
supported. A question regarding wider application of evidence-based policy is
whether the heightened emphasis on demonstrating outcomes will affect support
for research that looks beyond what is tried and true today in search of innovations
that will lead to better outcomes tomorrow.
The New Federal
Approach
The recent
OMB memo directs federal agencies to show how they use evidence in their fiscal
2014 budget submissions and includes a separate section describing their most
innovative approaches. The motivation for such steps was explained by acting
OMB director Jeffrey Zients, who wrote:
“Since taking office, the President
has emphasized the need to use evidence and rigorous evaluation in budget,
management and policy decisions to make government work effectively. This need
has only grown in the current fiscal environment. Where evidence is strong, we
should act on it. Where evidence is suggestive, we should consider
it. Where evidence is weak, we should build the knowledge to support
better decisions in the future.”
The
directive appears to be less about using evidence to decide which programs to
fund than about prodding agencies to use evaluation more widely to gather
evidence.
It invites
agencies, for example, to propose new evaluations and suggests that resources
will be available for initiatives expanding their use of evidence. Examples of
such initiatives include low-cost evaluations using existing administrative
data, expanding evaluations within existing programs and employing evaluations
linked to performance partnerships that blend “multiple funding streams to test
better ways to align services and improve outcomes.”
OMB also
directs federal agencies to show that, between fiscal years 2013 and 2014, they
are increasing the use of evidence in formula and competitive programs. The
directive offers several approaches for agencies to consider. The use of
evidence-based grants is one.
The tiered frameworks used by several
agencies, including the Department of Education, are an example of such an
evidence-based grant strategy. Under that approach, programs that demonstrate
stronger evidence of effectiveness through rigorous evaluation are eligible for
more funding. Other grants are available to programs with moderate outcomes or
evidence. And to promote innovation, some tiered initiatives fund the
development of new models that don’t yet have strong evidence of effectiveness
but include an evaluation component.
“Unlike earlier OMB initiatives,
this is evidence and evaluation designed to foster program improvement rather
than determining whether the program is working or not working and should be
cut or get increased funding,” Baron said. “It’s not a thumbs up or thumbs down
on the whole program. That’s not the goal of this evidence-building exercise.
It’s trying to identify within programs which strategies or models are
effective, using rigorous evaluation. It’s the kind of evidence that, if it
shows effectiveness, can be disseminated throughout the whole program and used
to improve its overall performance.”
For more details, see
the May 18, 2012 OMB memorandum, which can be found online at: