The OER review focused on two subject areas at the middle school level: English Language Arts (ELA) and mathematics. For both reviews, ten reviewers with subject matter expertise and deep familiarity with the Common Core State Standards (CCSS) in ELA and mathematics were initially selected and trained. The ELA and mathematics groups worked independently but followed the same process described here for pre-work, training, follow-up, and data validation.
Each review group received training before the review period began. This section describes the assigned pre-work, the training day, group norming work, and follow-up sessions.
Reviewers were given pre-work to complete before the training day. We held two pre-training webinars for each group to orient participants to their work. The introductory orientation webinar, attended by both the mathematics and ELA reviewers, described OER, clarified the review goals, detailed the resource selection criteria, and unpacked results from the 2013 and 2014 reviews.
Content-area-specific webinars addressed the key shifts in the CCSS and explored the changes in instructional practice needed to support authentic CCSS implementation in ELA and mathematics. The goal was to ensure all reviewers had a common lens and a shared understanding of the CCSS and of what to look for in aligned curriculum.
These content-specific webinars also introduced reviewers to the three core instruments that would be used during the review (IMET, EQuIP, and Achieve OER) and assigned the following readings in preparation for the training:
Assigned pre-reading for the middle school mathematics and middle school English Language Arts groups:

- Focus Documents – Achieve the Core
- Review of the 5th grade standards and the High School CCSS
- Review of the Achieve EQuIP rubric (video)
Following the virtual sessions, reviewers received a link to one of their assigned resources in advance of the in-person training so they could get a high-level overview of the material and become familiar with its navigational structure. They also received a copy of the Common Core Worksheet to help frame their initial walkthrough of the material. This preliminary work with the resource allowed us to devote more of the in-person training to deep group discussion of how to apply the rubrics to the resource.
Each group attended a full-day, in-person training session for its subject area. Working in small groups, participants applied each of the five instruments being used for the review, using their first assigned resource as the basis for discussion. OSPI facilitators explained how the instruments were to be used, why they were chosen, and how they complement each other with relatively little overlap.
Facilitators addressed participant questions, assigned resources to the reviewers for the four-week virtual review period, and covered all administrative details. The evaluation at the end of the day showed that participants understood what they were expected to do, why they were doing the work, and how to get help when they needed it.
Using the selected practice unit, participants reviewed the OER materials with the five review instruments. Scoring for the first resource was discussed as a group, but all final responses were individual. This face-to-face time was important for giving all reviewers a shared understanding of how to use the instruments, how to apply the criteria, and what was expected of their individual work.
Previous testing with the instruments showed that a typical review would take 6–9 hours for mathematics and 3–4 hours for ELA. Reviewers understood that the first review might take longer but that subsequent reviews should fall into that range. During the first check-in meeting, by which point a majority of the reviewers had completed more than one review, they confirmed their experience matched this expectation.
The OER facilitation team set up three check-in meetings each for the math and ELA reviewers to monitor progress during the four-week review process. The purpose of the check-in meetings was to identify and answer questions that arose among the reviewers, seek congruence in approaches to evaluating the materials, and identify high-variance items.
Reviewers were asked about their initial experiences evaluating materials, including the amount of time spent and advice for other reviewers.
At the check-in meetings, after reviewer questions were addressed, we identified items where there was high variance in the responses to individual scored questions in the rubrics. While there were very few instances of high variance overall, the process surfaced some lingering misconceptions about how to apply certain rubrics.
When a high-variance item was uncovered, participants were notified via email, with the relevant data from all reviewers included. Participants received clear direction that the purpose of the email alert was simply to inform the group about the high variance in a particular response. They were given the opportunity to discuss their comments and scores during the check-in meetings or via email. Participants clearly understood they could keep their existing scores, but if they had missed something in their review or had misunderstood how to evaluate a particular item in a rubric, they had the opportunity to adjust their score.
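The variance flagging described above can be sketched in a few lines of code. This is a minimal illustration, not the facilitation team's actual tooling: the rubric item IDs, score values, and the flagging threshold below are all hypothetical, chosen only to show how per-item score spread across reviewers might be computed and reported.

```python
from statistics import pstdev, mean

# Hypothetical reviewer scores per rubric item (item ID -> one score per reviewer).
# These IDs, values, and the threshold are illustrative assumptions only.
scores = {
    "EQuIP-1a": [3, 3, 2, 3, 3],
    "EQuIP-2b": [1, 3, 0, 3, 2],   # reviewers disagree widely here
    "IMET-4":   [2, 2, 2, 2, 3],
}

THRESHOLD = 1.0  # flag items whose score spread exceeds this standard deviation


def high_variance_items(scores, threshold=THRESHOLD):
    """Return rubric items whose population standard deviation exceeds the threshold."""
    return sorted(
        item for item, vals in scores.items()
        if pstdev(vals) > threshold
    )


for item in high_variance_items(scores):
    print(f"{item}: scores={scores[item]}, mean={mean(scores[item]):.2f}")
```

With the sample data above, only "EQuIP-2b" would be flagged and emailed to the group for discussion; the other items cluster closely enough to pass.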
This report is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.