California, like most states that have recently adopted new testing systems, will have to undergo a “peer review,” a process whereby experts from around the country examine the quality of the exams K-12 students are taking to evaluate their academic gains.
In order for this review to take place, administrators in the California Department of Education have to submit thousands of documents that attempt to demonstrate how they’re complying with federal guidelines.
The tests “have to have a very structured set of elements the (U.S. Department of Education) wants to see for the test to have good quality,” said Doug McRae, a retired testing expert.
If states don’t pass muster, they could be in danger of losing Title I money, which is allocated for low-income students. In 2008, California was subject to a fine because the peer review found issues with the 8th grade math test: some children were taking algebra and others general math, which did not comply with regulations, McRae said.
“You have to have one test. They were dinged on that and were asked to revise it,” he said. “It was a contentious issue.”
California had a July deadline to submit documents for its peer review, documents that McRae obtained through a public records act request. In the documents submitted, he found more evidence for the criticism he has been making all along: that the tests were implemented too fast.
“In many schools, in many classrooms they had not been teaching according to the Common Core,” McRae said. “If kids have not had an opportunity to learn, then the test results are not valid. It’s not fair to test kids in materials they have not been taught. That’s a common sense thing.”
Another point McRae takes issue with is that the tests were not ready to evaluate the academic performance of certain subgroups, including English learners, low-income students, and students with disabilities.
“The test itself wasn’t complete for those segments in the population...and the money is intended for those populations,” McRae said, referring to Title I funds.
McRae asked that I post his comments on my blog, so here they are. If you want more, you can find his more technical comments here and here.
SBE Folks, CDE Folks, Interested Others --
Attached is an updated handout (July SBE meeting, Item # 1) on highlights from the CA Peer Review submission to the feds in June, updated to include information from the Smarter Balanced Peer Review submission that I received August 1. Also attached is an updated “Initial Observations” document on this material, to provide the detailed observations from both submissions that led to the highlights on the updated handout.
The attached material provides good context material for the upcoming release of 2016 CAASPP results that include 2016 Smarter Balanced scores. In particular, the updated highlights and observations show that
Opportunity-to-Learn issues (i.e., degree of implementation of Common Core instruction) have not been addressed by either the CDE or SBAC over the past two years, despite indications from SBAC that OTL surveys would be done for both spring 2015 and 2016 test administrations. The lack of information on OTL hampers sound interpretation of SB scores, and underscores a conclusion that it will be 2018 or so before SB test results will become truly meaningful for CA’s students and teachers, schools, districts, and public. I’d also note that the evolution of capability to take tests on computers also contaminates interpretation of Smarter Balanced results, especially for underserved students who most likely have had fewer opportunities to experience technology-based instruction.
The Smarter Balanced Peer Review information “revealed some gaps in item coverage at the low end of the performance spectrum” that clearly led to compromised reliability (or accuracy) of results for low-wealth students, EL’s, and SWD’s, especially for the Math tests in the secondary grades, most prominently the HS Math results. This information needs to be taken into account when interpreting 2016 Smarter Balanced scores, particularly comparative information for subgroups across content areas and grade levels.
Concerns that scores from roughly 30,000 students who participated but responded minimally to test questions were excluded from 2015 public aggregate results were not addressed in the Peer Review material. That exclusion led to inflated performance level percentages for selected schools and districts in the 2015 results.
-- Doug McRae