Title: An experiment with separate formative and summative rubrics in educational peer assessment
Abstract: Educational peer assessment has proven to be a powerful approach for providing students with timely feedback and allowing them to help and learn from each other. In an educational setting, most peer assessment consists of a single round. The problem with this setting is that either the authors have no chance to update their work, which renders their peers' suggestions useless, or the authors can make changes after receiving the peer reviews, which forecloses using peer review to help assign grades. To address these issues, in our classes we now use two rounds of online review, with a different rubric for each. Our Expertiza peer-review system allows the evaluation rubric to vary by round. In the first review round, we present a formative review rubric to the peer reviewers. In the formative rubric, we try to encourage student reviewers to look into the details, point out the problems they find in the author's work, and offer insightful suggestions. After the formative review round, authors have the opportunity to submit an updated version of their artifacts. Next comes a summative peer-review round, using a summative rubric. A summative peer-review rubric focuses more on evaluating the quality of the artifact by comparing it against specific benchmarks. In this paper, we discuss the design of two-round peer-review assignments in a computer-science course and present our observations on student peer-review activity. An analysis of students' peer-assessment responses confirms the effectiveness of this peer-review design.
Publication Year: 2016
Publication Date: 2016-10-01
Language: en
Type: article
Indexed In: ['crossref']
Access and Citation
Cited By Count: 9