The Evidence Chapter: What is it, and what does the FY 2020 version mean?

The Forum for Youth Investment (Forum) is excited to launch a blog series offering new perspectives on the use of evidence in policymaking. The series will expand on findings from the Forum’s 2017 Managing for Success report as well as key themes from the Forum’s events and publications on evidence-based policymaking since that report’s release.

Coverage of the President’s annual budget proposal may focus on sound bites and fact sheets, but readers seeking more policy and historical context can turn to the budget’s lesser-known Analytical Perspectives, or “AP,” volume. Its dedicated chapters “highlight specified subject areas or provide other significant presentations of budget data that place the budget in perspective.” They range from big-picture topics like borrowing and debt to specific areas like, in recent years, evidence-based policymaking.

The FY 2020 AP chapter, “Building and Using Evidence to Improve Government Effectiveness,” can help policymakers, researchers, and service providers understand the federal government’s approach and priorities for using evidence in policymaking. It focuses on four key areas:

1. Evidence-building strategies to learn and improve,
2. Evaluation as a tool to learn and improve,
3. Harnessing data for learning and improvement, and
4. Promoting transparency and accountability in federal evidence-building.

These four areas demonstrate how the federal government is moving forward on a number of key ideas found in the Forum’s recent work.

Evidence-building Strategies to Learn and Improve

This section highlights three interconnected strategies that agencies should employ and gives examples of where agencies are already using them. These strategies include requirements mandated by the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act) and recommendations from the Commission on Evidence-Based Policymaking’s (Commission) final report and the Forum’s Managing for Success report.

  • Designating a chief evaluation officer. Chief evaluation officers help strengthen agency capacity to build evidence through evaluation and other means by elevating leadership and coordination of evaluation as well as supporting program offices in emerging techniques and best practices. Both the Forum and the Commission’s reports highlighted the importance of chief evaluation officers and recommended that agencies create this position to match the leadership positions that are often found in performance management or statistics.
  • Developing and using multi-year learning agendas. Chief evaluation officers will also play an important role in implementing the Evidence Act’s requirement for multi-year learning agendas. Agencies use learning agendas to plan and prioritize research on both internal operational questions and strategic questions about meeting their mission through programs, policies, and regulations.
  • Leveraging partners. Learning agendas are a key mechanism for building connections within the agency as well as with external partners in the academic and private sectors. The chapter encourages agencies to partner with academic institutions, grantees, and other federal agencies to increase their capacity to build and use evidence in their fields.

Evaluation as a Tool to Learn and Improve

The second section discusses how evaluation can “promote efficient and effective use of taxpayer dollars,” including by informing decisions about what to fund and how to improve programs. The chapter highlights several strategies and agency examples of effective use of evaluation:

  • Investing in evaluation. Agencies can strengthen their use of evidence for program management by establishing funding set-asides for evaluation activities. Both the Commission and the Forum recommended that agencies set aside one percent of funding allocations for evaluation purposes. The chapter highlights set-asides that Congress has already authorized, new or continued proposals for set-asides (including for certain programs related to higher education and the Department of Justice), and proposals to help certain agencies use their evaluation funds effectively.
  • Learning from evaluation. Evaluation should be more than a “thumbs up, thumbs down” exercise in which programs are judged only as successes or failures. Instead, it can be a learning experience through which agencies understand what works and how to improve their programs. The section highlights how different agencies are currently using evaluation to improve their programs and understand which parts of a program really make a difference. The Forum has previously profiled other examples of using evidence for improvement in three recent case studies.
  • Evaluation and performance management. Intra-agency partnerships can be quite fruitful as well. The Forum has previously recommended that agencies find more ways to integrate multiple types of evidence into their decision-making processes. The section offers suggestions for how evaluation and performance management approaches can support each other including using performance management to identify evaluation priorities, ensuring that evaluation priorities are adequately tracked by performance management indicators, and using outliers in performance management data to spur new evaluations.

Harnessing Data for Learning and Improvement

This section highlights efforts related to leveraging data as a strategic asset, particularly as it relates to the President’s Management Agenda.

  • Federal data strategy. The federal government is currently developing a Federal Data Strategy, which will “define principles, practices, and an action plan to support a consistent approach to Federal data stewardship, use, and access.” Once released, the strategy will also incorporate elements of the Evidence Act.
  • Addressing statutory barriers to data access. This section explains the budget proposal to expand access to valuable datasets (such as the National Directory of New Hires). Federal agencies that conduct research, statistical activities, evaluation, and performance management could greatly benefit from increased access to federal datasets that are currently inaccessible because of statutory restrictions.

Promoting Transparency and Accountability in Federal Evidence-Building

The final section in the chapter notes the importance of transparency and accountability when building or using evidence.

  • Transparency. Many agencies have already produced formal evaluation policies, which emphasize the need to respect the principles of privacy, rigor, and transparency and can guide all of an agency’s evaluation activities. The Evidence Act now requires chief evaluation officers at major agencies to establish and implement evaluation policies. This mirrors recommendations from the Forum’s Managing for Success report, which highlighted the evaluation policies of the Department of Labor and the Administration for Children and Families within the Department of Health and Human Services.
  • Accountability. Agencies should make their learning agendas public to encourage external stakeholders to contribute and to ensure that outside partners can hold agencies accountable for answering the questions they pose. Agencies can take transparency one step further by sharing data whenever possible, allowing external stakeholders to conduct additional analyses.

Conclusion

The strategies, proposals, and agency examples in the AP chapter showcase both the impressive work already underway to incorporate evidence into decision-making and how far the federal government still has to go. The priorities demonstrate how agencies can continue to make significant progress using their existing authorities, and also that Congress maintains an important role in providing the authorities, funding, and oversight needed to support meaningful progress.