Describe the purpose of the review, stressing how it fits into program assessment plans. Explain that the purpose is to assess the program, not individual students or faculty, and describe ethical guidelines, including respect for confidentiality and privacy.
Describe the nature of the products that will be reviewed, briefly summarizing how they were obtained.
Describe the scoring rubric and its categories. Explain how it was developed.
Explain that readers should rate each dimension of an analytic rubric separately, and that they should apply the criteria without concern for how often each category is used.
Give each reviewer a copy of several student products that are exemplars of different levels of performance. Ask each volunteer to independently apply the rubric to each of these products, and show them how to record their ratings.
Once everyone is done, collect everyone’s ratings and display them so everyone can see the degree of agreement. This is often done on a blackboard, with each person in turn announcing his/her ratings as they are entered on the board. Alternatively, the facilitator could ask raters to raise their hands when their rating category is announced, making the extent of agreement very clear to everyone and making it very easy to identify raters who routinely give unusually high or low ratings.
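The tallying described above can also be done quickly in software. The sketch below is a minimal illustration, assuming a 1-4 holistic rubric scale and hypothetical rater names and scores; it displays the spread of ratings for each exemplar product and flags raters whose averages sit noticeably above or below the group's.

```python
from collections import Counter
from statistics import mean

# Hypothetical calibration-session data: each rater's score (1-4 scale)
# for the same three exemplar products, in the same order.
ratings = {
    "Rater A": [3, 2, 4],
    "Rater B": [3, 2, 4],
    "Rater C": [4, 3, 4],
    "Rater D": [2, 2, 3],
}

num_products = 3

# Show the distribution of ratings for each product so the degree of
# agreement is visible at a glance.
for product in range(num_products):
    scores = [r[product] for r in ratings.values()]
    print(f"Product {product + 1}: {dict(Counter(scores))}")

# Flag raters whose personal average differs notably from the group
# average, i.e., those who routinely rate unusually high or low.
group_mean = mean(s for scores in ratings.values() for s in scores)
for rater, scores in ratings.items():
    if abs(mean(scores) - group_mean) > 0.5:
        print(f"{rater} averages {mean(scores):.2f} vs. group {group_mean:.2f}")
```

The 0.5-point threshold is arbitrary; a facilitator would choose a tolerance appropriate to the rubric's scale.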
Guide the group in a discussion of their ratings. There will be differences, and this discussion is important to establish standards. Attempt to reach consensus on the most appropriate rating for each of the products being examined by inviting people who gave different ratings to explain their judgments. Usually consensus is possible, but sometimes a split decision results; for example, the group may agree that a product merits a “3-4” split rating because it has elements of both categories. You might allow the group to revise the rubric to clarify its use, but avoid allowing the group to drift away from the learning outcome being assessed.
Once the group is comfortable with the recording form and the rubric, distribute the products and begin the data collection.
If you accumulate data as they come in and can easily present a summary to the group at the end of the reading, you might end the meeting with a discussion of four questions:
What do the results mean?
Who needs to know the results?
What are the implications of the results for curriculum, pedagogy, or student support services?
How might the assessment process, itself, be improved?
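The end-of-reading summary mentioned above need not be elaborate. The sketch below is one minimal way to produce it, assuming an analytic rubric scored 1-4 per dimension; the dimension names and scores are illustrative, not taken from any actual rubric.

```python
from statistics import mean

# Hypothetical ratings for an analytic rubric: each product receives a
# separate 1-4 score on each dimension (dimension names are illustrative).
scores = [
    {"organization": 3, "evidence": 2, "mechanics": 4},
    {"organization": 4, "evidence": 3, "mechanics": 3},
    {"organization": 2, "evidence": 2, "mechanics": 3},
    {"organization": 3, "evidence": 4, "mechanics": 4},
]

# Average score per dimension: a quick summary to seed the closing
# discussion of what the results mean and who needs to know them.
for dim in scores[0]:
    values = [s[dim] for s in scores]
    print(f"{dim}: mean {mean(values):.2f} (n={len(values)})")
```

Dimension-level means like these make it easier to see, for instance, that one dimension lags the others, which feeds directly into the curriculum and pedagogy question above.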
Employer’s View of Assessment & Evaluation
The Association of American Colleges and Universities recently (November and December 2007) conducted a survey of how employers view learning and assessment and how they might be improved. Employers were asked to provide information on where to focus resources to assess student learning, key areas where students are expected to be knowledgeable or skilled, assessment techniques that demonstrate student capacities, and assessment information that employers consider valuable for evaluating student potential.
Employers Advise Colleges Where To Focus Resources To Assess Student Learning