<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=799546403794687&amp;ev=PageView&amp;noscript=1">

Method to assess early intervention programs for America’s youngest is flawed


by Nathan Gill | January 18, 2018

The US Department of Education’s Office of Special Education Programs (OSEP) should change the way it evaluates state early intervention programs for infants and toddlers with developmental delays, according to research from the University of Colorado Anschutz Medical Campus.

In a study published in the American Journal of Evaluation, researchers from the CU School of Medicine and the Colorado School of Public Health at CU Anschutz show that the evaluation method OSEP uses to gauge the effectiveness of early intervention (EI) programs for children under three years of age is scientifically invalid and produces misleading results.

“Millions of dollars and thousands of hours have been put into the OSEP evaluation process and states are being encouraged to use the child outcome results produced by this design to inform their efforts at quality improvement,” said Steven Rosenberg, lead author and associate professor of psychiatry at the University of Colorado School of Medicine at the CU Anschutz Medical Campus in Aurora. “Neither the states nor OSEP seem to understand that the results of their evaluation process should not be used to assess the quality of early intervention services.”

Judging the quality of state programs

OSEP administers Part C of the Individuals with Disabilities Education Act (IDEA), which authorizes the provision of EI services in the US for children ages 0-3 who have developmental delays. Early intervention programs are administered at the state level and assessed through OSEP’s evaluation process. OSEP reports the results of that process to Congress to demonstrate the effectiveness of Part C EI nationally; the findings are also used to judge the quality of individual state programs.

OSEP’s evaluation uses a design called a single-group pre-post comparison to assess program outcomes. The investigators say it is a poor method because that design cannot distinguish child progress produced by EI from changes that result from normal variability in child growth.

The researchers say that to justify the use of a single-group pre-post design, OSEP has had to assume that children’s delays improve only in response to treatment, so that all child progress counts as evidence of effectiveness. Not so, say the authors of this article.

“Real babies show variability in their rates of skill acquisition,” Rosenberg said. “OSEP’s approach incorrectly assumes that all improvements are due to intervention – that’s not the way young children work.”
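The statistical point can be made concrete with a small simulation. The sketch below is illustrative only: the sample size echoes the study’s, but the normality assumptions, noise model, and delay cutoff are invented for the example rather than taken from the paper. It generates two noisy assessments of the same untreated children and counts how many initially “delayed” children cross back over the threshold with no intervention at all.

```python
# Minimal sketch: synthetic data and a hypothetical cutoff; nothing here
# is taken from the study itself. Each child's underlying ability is
# stable; only assessment-to-assessment noise varies.
import random

random.seed(0)

N = 1_100            # same order of magnitude as the study's national sample
DELAY_CUTOFF = -1.5  # hypothetical z-score threshold for "delayed"

ability = [random.gauss(0, 1) for _ in range(N)]  # stable true ability

def assess(true_ability):
    """One noisy assessment; the noise stands in for normal variability
    in young children's rates of skill acquisition."""
    return true_ability + random.gauss(0, 1)

pre = [assess(a) for a in ability]   # e.g., the 9-month assessment
post = [assess(a) for a in ability]  # e.g., 24 months, still with no EI

delayed = [i for i in range(N) if pre[i] < DELAY_CUTOFF]
recovered = [i for i in delayed if post[i] >= DELAY_CUTOFF]

print(f"flagged as delayed at pre-test: {len(delayed)}")
print(f"no longer delayed at post-test, with zero treatment: "
      f"{len(recovered)} ({100 * len(recovered) / len(delayed):.0f}%)")
```

Because children flagged as delayed are selected in part for unlucky measurement noise, many of them score above the cutoff on the second assessment anyway. That regression-to-the-mean pattern is exactly what a single-group pre-post design cannot separate from a genuine program effect.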

“States need to understand the problems with this process; it shouldn’t be used to compare one state to another,” he said.

To show that the single-group pre-post comparison design cannot yield valid information about EI program effectiveness, the researchers examined the development of about 1,100 infants in a national sample of children who did not receive EI services. About 80 percent of the children who had delays at nine months showed no delays at 24 months, even though they received no EI. That untreated children appeared to make such substantial progress, the authors argue, raises serious questions about how much of the gains reported for children who did receive EI can be attributed to the intervention itself.
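The 80 percent figure also shows why an untreated benchmark changes the arithmetic. In the back-of-the-envelope sketch below, the treated-group recovery rate is a hypothetical number chosen purely to illustrate the comparison; it does not come from the study.

```python
# Back-of-the-envelope comparison; the treated figure is HYPOTHETICAL.
natural_recovery = 0.80  # reported recovery rate with no EI services
treated_recovery = 0.85  # hypothetical recovery rate for children in EI

# A single-group pre-post design credits all observed recovery to EI.
naive_estimate = treated_recovery

# Benchmarking against untreated children isolates the added effect.
benchmarked_estimate = treated_recovery - natural_recovery

print(f"naive single-group 'effect': {naive_estimate:.0%}")
print(f"gain beyond natural recovery: {benchmarked_estimate:.0%}")
```

On the naive reading, EI looks dramatically effective; benchmarked against natural recovery, the attributable gain is small. That distinction is precisely what the authors argue the OSEP design cannot draw.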

“The fact that Part C EI programs are expected to draw conclusions about their program’s quality based on the results of a single group pre-post comparison design does a disservice to Part C administrators, practitioners, families and other stakeholders who are invested in improving outcomes for infants and toddlers with developmental delays,” the authors wrote.