This collaborative, iterative process would allow OHS to adopt or refine the performance measures it collects over time as research and the field generate new tools or new lessons about existing ones. New tools or indicators should initially be piloted in a subset of grantees to help OHS and researchers understand how these tools actually work in practice, collect baseline data on the distribution of grantee performance, and identify potential implementation challenges before adopting tools for program-wide use. As new tools and indicators are rolled out system-wide, they should initially be used on a “no stakes” basis. Programs should be required to report information on these measures to OHS, and OHS should report information on grantees’ performance, both individually and in the aggregate, back to grantees and to the broader public. But no consequences should be attached to new measures in their first two years of system-wide use, so that grantees can become accustomed to new tools or indicators before they are used to inform decisions. As better tools for measuring child and family outcomes are adopted, OHS may be able to reduce its use of compliance and input-based indicators to measure program performance, while maintaining a key set of safety, health, and compliance measures to ensure that all Head Start grantees meet minimum standards.

MAKING GRANTEE PERFORMANCE INFORMATION MORE TRANSPARENT

As OHS adopts measures of program performance in key domains, it should publicly report the information on these measures in a transparent and accessible format that supports comparison across grantees on key indicators of program performance. OHS collects a great deal of data from grantees but does not share that data back with grantees, researchers, or the public in a way that informs continuous improvement. For example, OHS posts individual grantees’ CLASS scores and monitoring reports to its website, but this information is presented in a static format that does not allow users to easily access and compare information across multiple grantees, or to connect information from grantees’ CLASS and monitoring reports with other data included in their Program Information Report. Transparent, interactive data reports would enable grantees, researchers, policymakers, and other stakeholders to access data on common program performance indicators, track trends in grantee or program-wide performance over time, and compare and analyze data across subsets of grantees serving similar populations. Interactive reports should enable users to sort data by child, family, and community demographic characteristics; grantee size; and other relevant grantee characteristics in order to facilitate constructive comparisons across grantees serving similar populations. Reports should also be designed to allow users to track correlations between different performance measures and trends in program performance data over time.

OHS should publicly report on the grantee performance data it already collects, including information from Head Start monitoring reports, CLASS scores, aggregate rates of child attendance, staff qualifications, and the percentage of children with health insurance. While these data provide only a partial picture of program performance, they can still offer useful information to grantees, researchers, and other stakeholders. Data should be made available at the grantee, delegate agency, and program-wide levels, with appropriate safeguards to prevent disclosure of individually identifiable child-level data. As new measures are developed and adopted for program-wide use, they should be added to these transparent, interactive, and public data reports.

As noted above, all Head Start grantees are required to set goals for improving children’s school readiness and to collect and analyze child-level assessment data,27 but programs are free to select the assessments they use to meet these requirements. In the absence of common, valid, and reliable measures of child learning outcomes, transparent, interactive reports should include information from the assessments that programs use to measure children’s learning and development, with appropriate information from assessment publishers that places program-reported data in context.

To the extent that grantees collect data on additional measures that reflect their unique context, philosophy, community needs, or population served, they should also be permitted to submit this information for inclusion in transparent, interactive reports. Programs serving dual-language learners, for example, might submit data on children’s development in their home language.

DIFFERENTIATING GRANTEE PERFORMANCE

An effective system of performance measurement should not only collect information on grantee performance but also differentiate multiple levels of performance across various domains. One of the weaknesses of compliance-based systems is that they measure performance in strictly binary terms: either a grantee meets a standard or it does not. To be sure, there are some minimum standards, such as compliance with civil rights laws or certain health and safety requirements, that all programs must meet and for which further differentiation is unnecessary or inappropriate. But across many areas of grantee quality or outcomes, such as parent engagement or teacher-child interactions, the distribution of grantee performance is far more continuous. Understanding where grantees fall along a continuum of performance is often more informative than simply knowing that they met a minimum standard.
