Review Process

The OER review focused on two subject areas: 11th-12th grade English Language Arts and Algebra 1/Integrated Math 1. For both reviews, ten reviewers with subject matter expertise and familiarity with the Common Core State Standards in English Language Arts and mathematics were selected and trained. The ELA and mathematics groups worked independently but used the same process described here for pre-work, training, follow-up, and data validation.

Each review group received training prior to initiation of the review period. This section describes the pre-work assigned, the training day, group norming work, and follow-up sessions.

Pre-Work

Reviewers were given pre-work to complete before the training day.

We held pre-training webinars for each group to orient participants to their work. The webinars described the OER resources, explained why we were engaging in a curriculum review and the big shifts in thinking the CCSS require, and assigned the following reading in preparation for the training.

Algebra 1/Integrated Math 1
11th Grade English Language Arts

We also introduced them to the three core instruments that would be used in the review: Publishers’ Criteria, Achieve EQuIP Rubric, and Achieve OER Rubric.

Reviewer Training

Each group attended a full-day, in-person training session for their subject area. Participants spent most of the morning exploring the biggest shifts in instructional practice needed to teach within the CCSS for ELA and mathematics. The goal was to understand the changes from former standards approaches, including what to look for in aligned curriculum.

Next, we spent time examining the resources being reviewed and ensured that participants could access the sites where they were located and understood the criteria for selection.

Participants were introduced to the five instruments being used for the review.

Team leaders explained the use of the instruments, why they were being used, and how they complement each other with relatively little overlap.

Participants individually assessed the selected practice unit using each of the rubrics and submitted their training data electronically, as described more fully below.

Team leaders addressed participant questions, randomly assigned work to the reviewers, and addressed administrative details.

The evaluation at the end of the day showed that participants knew and understood what they were supposed to do, why they were doing the work, and how to get help when they needed it.

Group Norming

Using the selected practice unit, participants reviewed the OER materials with the five review instruments.

Previous testing with the instruments showed that a typical review would take 5-8 hours for mathematics and 2-4 hours for ELA. Reviewers understood that the first review might take longer but that subsequent reviews should fall into that range. During the first check-in meeting, when a majority of the reviewers had completed more than one review, they confirmed that their experience matched this expectation.

Participants then compared their responses with others in their small groups and ultimately with the larger group. As part of the practice review, we compared individual results, discussed differences, and clarified the use of the instruments and the expectations for individual work.

Check-in Meetings

The OER facilitation team (OSPI and Relevant Strategies) set up two check-in meetings to measure progress during the month-long review process. The purpose of the check-in meetings was to identify and answer questions that arose among the reviewers, seek congruence on approaches to evaluating the materials, and identify high-variance items.

Reviewers were asked about their initial experiences evaluating materials, including the amount of time spent and advice for other reviewers.

At the check-in meetings, after reviewer questions were addressed, we identified rubric items with high variance in the individual scored responses. While there were very few instances of high variance overall, the process resolved some lingering misconceptions about how to apply certain rubrics.
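The report does not specify how high-variance items were detected; as a rough illustration only, a flagging step along the following lines could serve that purpose (a minimal Python sketch, where the score layout, the variance threshold, and the flag_high_variance name are all assumptions, not details of the actual process):

    from statistics import pvariance

    # Hypothetical sketch: flag rubric items whose scores vary widely
    # across reviewers. Data layout and threshold are assumptions.

    # scores[item_id] -> one score per reviewer for that rubric item
    scores = {
        "EQuIP-1a": [2, 2, 3, 2, 2],
        "EQuIP-1b": [0, 3, 1, 3, 0],   # reviewers disagree sharply
        "OER-2":    [3, 3, 3, 2, 3],
    }

    VARIANCE_THRESHOLD = 1.0  # assumed cutoff for "high variance"

    def flag_high_variance(scores, threshold=VARIANCE_THRESHOLD):
        """Return the rubric items whose score variance exceeds the threshold."""
        return [
            item for item, values in scores.items()
            if len(values) > 1 and pvariance(values) > threshold
        ]

    for item in flag_high_variance(scores):
        print(f"High variance on {item}: {scores[item]}")

Any item flagged this way would then trigger the reviewer notification described below.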

When a high-variance item was uncovered, participants were notified about the variance via email, with the relevant data from all reviewers included. Participants received clear direction that the purpose of the email alert was to inform the group about the high variance in a particular response. They were given the opportunity to review others' comments and scores and to collaborate with the group to identify and understand the rationale for the different responses. Participants clearly understood they could keep their existing scores, but if they had missed something in their review or had misunderstood how to evaluate a particular item in a rubric, they had the opportunity to adjust their score.