MOVING TOWARD STRONGER MEASURES OF HEAD START PERFORMANCE

The recent history of education is full of examples in which policymakers decided to hold programs or individuals accountable for outcomes, mandated accountability, and then created or adopted measurement tools to comply with those mandates. The National Reporting System adopted by the George W. Bush administration offers one such example, as do the seven designation renewal criteria adopted following the 2007 Head Start reauthorization. In K-12 education, many states developed systems of teacher evaluation in response to similar mandates. As these examples indicate, this approach can lead to measurement tools that don’t actually measure the most important outcomes, producing unintended consequences and public backlash. A more sensible approach would be to identify the outcomes or components of program performance that are most important, develop tools to measure those outcomes or domains, pilot those tools in a small number of sites, refine them based on pilot experience before rolling them out at scale, and give the field time to become familiar with the tools before attaching consequences to them.

How would this look in Head Start?

First, federal policymakers should define the key domains of program performance that are crucial to measure in order to understand and improve the performance of Head Start grantees: child and family outcomes, safety, health, compliance, program quality, and grantee health and capacity. Defining these domains is an essential first step toward adopting meaningful performance measures. Because decisions about which key domains to measure are not simply technical but also reflect judgments about the priorities and purposes that Head Start programs should emphasize, the process for defining them should engage a variety of federal policymakers, researchers, practitioners, and other stakeholders.

Once the key domains of program performance have been defined, OHS can identify the data for each of those domains that it currently collects — and stop collecting unrelated data. It is likely, however, that there will be key domains of performance in which consistent, valid, and reliable information is not currently available or collected across grantees. Where this is the case, federal officials must initiate a process, working with researchers, practitioners, and the broader field, to iteratively develop, refine, and adopt new measures of grantee performance.

Box 5: The Office of Head Start’s Efforts to Create a More Performance-Oriented Monitoring Process

The Office of Head Start is already taking steps to make its monitoring process more performance-oriented and to use it to better differentiate program performance. The Head Start Key Indicators, adopted in 2014, allow grantees that meet certain criteria to be reviewed against a streamlined set of 27 compliance measures rather than undergoing a full monitoring review. Grantees undergoing the streamlined review also receive Classroom Assessment Scoring System observations and a review of environmental health and safety.25 Grantees that fail the streamlined review must complete a comprehensive monitoring review, and those that pass a streamlined review in one review cycle must complete a comprehensive review in their next cycle. The Key Indicators are a positive — but only partial — step toward reducing the monitoring and compliance burden on programs with a track record of meeting standards, and it is too early to tell what effect they will have in practice.

Monitoring protocols developed for the FY2016 monitoring cycle include both compliance measures, which assess compliance with federal Head Start Program Performance Standards in key performance areas, and pilot quality measures, which are a preliminary step toward outlining a continuum of program quality to support ongoing quality improvement. The FY2016 Head Start Monitoring Protocol also calls on reviewers to meet with program directors and early childhood development coordinators to review the data that grantees collect on children’s progress toward school readiness goals, as well as grantees’ analysis of this data and any changes made in response to it. This approach stops short of holding programs accountable for the child learning outcomes they produce. But it does bring greater attention to child outcomes into the monitoring process, while also emphasizing programs’ capacity to collect data on children’s progress toward school readiness goals and use this data to inform continuous improvement. OHS should continue to build on this approach as an interim step until more robust program-wide measures of child and family outcomes can be adopted.

In some cases, relevant data on key domains of program performance may exist but not be collected consistently across grantees. Adopting a broader range of common performance measures across Head Start grantees will require developing standardized ways of collecting, reporting, and analyzing data on these performance domains. Rather than imposing a top-down approach, however, OHS should work with the field to pilot common ways to collect and analyze data, building on work that the National Head Start Association and leading Head Start grantees have already done, in order to select feasible approaches that generate the most useful information for both grantees and OHS.

Where valid and reliable measurement tools do not exist, federal research agencies must work with researchers, the philanthropic sector, and the private sector to develop new tools to measure key outcomes or domains of program quality. A variety of federal agencies — including the Office of Planning, Research and Evaluation, within the U.S. Department of Health and Human Services; the Institute of Education Sciences, within the U.S. Department of Education; the National Institute of Child Health and Human Development, within the National Institutes of Health; and the National Science Foundation — fund research on young children’s learning and development. Many existing and emerging tools used to measure quality and learning in early childhood programs grew out of this federally funded research. Other federal entities, such as the White House Social and Behavioral Sciences Team,26 may also have valuable insights to contribute to this work. As a result, the federal government is well positioned to fund both the validation and refinement of existing tools for use in Head Start programs and the development of new tools in domains where they are lacking.

This work will require collaboration, however. The federal agencies that fund relevant research will need to work together to develop a common research agenda for funding work that addresses gaps in the existing set of tools and knowledge. This would allow these agencies to share information about currently funded projects, pool resources across agencies where appropriate, avoid duplication of effort, and set common expectations for new measurement tools. For example, new measurement tools developed with federal funds should be designed to be valid and reliable for the diverse population of children that Head Start programs serve — including dual-language learners and children with disabilities — regardless of the agency that funds the research. Wherever possible, federal funds should also support the development of assessment tools that measure children’s progress, not just offer a snapshot of skills at one point in time.

Federal research efforts must also be coordinated with philanthropic and industry efforts. Philanthropic funders can typically make spending decisions more quickly and nimbly than federal agencies can, and they can take greater risks with their funding. As a result, some stages of the research and development process, such as basic research to inform the creation of tools or large-scale validation studies of promising models, may be well suited to federal investment, while others, such as rapid iteration and refinement of new tools, may be better supported by philanthropic or private industry funds. To catalyze private efforts, federal research or philanthropic funds could also support prize-awarding “challenges,” alongside traditional contracts or grants, as a strategy for accelerating the identification of tools that measure outcomes. Developing, piloting, and refining new measurement tools also requires close collaboration among researchers, funders, and practitioners in the field. Finally, making new tools available at scale will likely require partnerships between researchers and commercial publishers or technology companies.

The vast majority of this work must be driven by actors outside the federal Office of Head Start, but OHS can play a valuable role in helping broker collaboration among various stakeholders and groups. For example, OHS could work with the National Head Start Association and other networks of grantees to facilitate collaboration between researchers and Head Start grantees in piloting new measurement tools.
