Getting to Know Coursera: Peer Assessments
by Katie McEwen, graduate assistant
Having started our conversation about assessment methods in Coursera in general, today we’ll turn our attention to peer assessments in particular. Peer assessments are designed to evaluate the kinds of unstructured output—essays, projects, videos, music, art, design challenges, etc.—a student might reasonably be required to complete in a traditional course. But it is precisely these kinds of open-ended assignments that pose serious problems in a massive online setting better suited to automatically graded quizzes and programming assignments. Who, after all, has the time to read 10,000 essays?
The answer, for Coursera at least, is other students. And while peer assessments have garnered a fair bit of attention, they’ve also quickly underscored some of the more pervasive, and truly difficult, issues faced by the Coursera model. Plagiarism is perhaps only the most obvious one. Peer assessments—the ways they work, the ways they don’t—raise serious questions about creating and cultivating community online, about navigating authority and language proficiency, and about grading as a reflective practice.
Laura Gibbs, an experienced teacher of online courses, provides a thoughtful response to her own experience with peer feedback in the recent “Fantasy and Science Fiction” class on her blog Coursera Fantasy. There, she also addresses some of the deeply problematic aspects of grading in Coursera.
Unlike automatically graded quizzes and programming assignments, peer assessments require a good-faith effort on the part of each student: not only to submit original work in the proper format and the proper language (still largely English), but also to evaluate the work of others anonymously, attentively, and constructively. For each assignment submitted in a course, students are then generally asked to evaluate the work of up to 4 or 5 peers. That’s not a negligible amount of work or time, especially in those courses with weekly or every-other-week peer-assessed assignments.
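To get a feel for how that workload scales, here is a minimal sketch in Python of one plausible reviewer-assignment scheme (a simple round-robin, invented here for illustration; nothing about it reflects Coursera’s actual implementation), in which every submission receives exactly as many reviews as each student is asked to write:

```python
import random

def assign_reviews(student_ids, k=4):
    """Give each student k peer submissions to review, round-robin style.

    Every submission also receives exactly k reviews, and no one
    reviews their own work (assumes more than k students).
    """
    order = list(student_ids)
    random.shuffle(order)  # vary the reviewer pairings from assignment to assignment
    n = len(order)
    assignments = {student: [] for student in order}
    for offset in range(1, k + 1):
        for i, reviewer in enumerate(order):
            # Review the submission `offset` places ahead in the shuffled order.
            assignments[reviewer].append(order[(i + offset) % n])
    return assignments

# The arithmetic is sobering: 10,000 students at k = 4 means
# 40,000 individual reviews for a single assignment.
reviews = assign_reviews(range(10_000), k=4)
assert all(len(peers) == 4 for peers in reviews.values())
```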
For example, students are asked to write short essays in “A History of the World since 1300,” work through a series of project briefs in “Human-Computer Interaction,” participate in assignments and design challenges in “Design: Creation of Artifacts in Society,” and formulate final projects for “Introduction to Sustainability.” David Owens, a professor at Vanderbilt’s Graduate School of Management, will try out group projects in his upcoming Coursera course, “Leading Strategic Innovation in Organizations.”
As we can see, peer assessment is part of the course requirements for a wide spectrum of Coursera courses across disciplines, not just those dedicated to literature. In fact, of the 50 Coursera courses opened between June and October 2012, 14 (or 28%) required at least one peer-graded assessment. And four of those 14 courses required only peer-graded assessments, with some unexpected titles in the mix: “Health Policy and the Affordable Care Act,” “Fantasy and Science Fiction,” “A History of the World since 1300,” and “Computer Architecture.”
Given this, it’s important to keep in mind that feedback on Coursera is anonymous: you don’t know whose work you’re reviewing or who has reviewed your work. This makes any actual discussion of the feedback essentially impossible. Want to follow up on a comment? Or continue the conversation? Not easy in a class of 50,000. “Modern and Contemporary American Poetry” gets around this structural problem by asking students to post their completed and graded essays in the forum (in addition to the peer-assessment system) for further feedback and more engaging discussion.
But already, this question of anonymity in Coursera—which protects privacy while making it impossible to ask questions, or engage in a direct conversation, about the feedback—points to larger issues of how privacy and pedagogy intersect online. How do we create sustainable online learning communities in Coursera if students are not accountable to their peers or for their feedback?
And when are students supposed to learn how to grade?
The piece most often missing in peer assessments, it seems, is not good-faith effort on the part of most students to submit and evaluate work. Rather, it’s that many (or even most) students simply do not have experience evaluating the work of others. And how could they? While some courses offer peer-assessment training, it doesn’t yet seem to have bridged the skill gap. So what are students really getting from this peer feedback? Is it helping them to write better essays or to create more complex projects?
Part of the problem is, of course, that grading is difficult no matter the medium: online or face-to-face, seminar or lecture. Neither is it a problem limited to students: many instructors likewise lack formal training in evaluating student work.
And here we run up against another unspoken assumption at work in Coursera: that grading is a relatively transparent, relatively straightforward process that can be “learned” quickly enough and well enough to be effective online. To guide students, Coursera encourages instructors to develop detailed rubrics for evaluation. Coursera also presents some limited data suggesting that grading with these rubrics has improved over time. That data, or rather the conclusions drawn from it, is far from conclusive: it could just as well be that students’ ability to grade effectively improves over time, or that only the more dedicated and skilled students continue to participate in peer grading.
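To make the rubric assumption concrete, here is a minimal sketch of how several peers’ rubric scores might be combined into a single grade. The four-criterion rubric is invented, and the per-criterion median is an assumption on my part (a scheme often described for Coursera-style peer grading), not a confirmed account of the platform’s pipeline:

```python
from statistics import median

# Hypothetical rubric: criterion -> maximum points. The criteria and
# point values are invented for illustration.
RUBRIC = {"thesis": 3, "evidence": 3, "organization": 2, "mechanics": 2}

def aggregate_peer_scores(peer_scores):
    """Combine several peers' rubric scores into one grade.

    `peer_scores` is a list of dicts mapping criterion -> points awarded.
    Taking the per-criterion median damps the effect of a single
    overly harsh or overly generous reviewer.
    """
    return {
        criterion: median(scores[criterion] for scores in peer_scores)
        for criterion in RUBRIC
    }

peer_scores = [
    {"thesis": 3, "evidence": 2, "organization": 2, "mechanics": 1},
    {"thesis": 2, "evidence": 2, "organization": 1, "mechanics": 2},
    {"thesis": 3, "evidence": 3, "organization": 2, "mechanics": 2},
    {"thesis": 0, "evidence": 1, "organization": 0, "mechanics": 1},  # harsh outlier
]
grade = aggregate_peer_scores(peer_scores)
print(grade, "total:", sum(grade.values()))  # the outlier barely moves the medians
```

The median damps an outlier reviewer, but notice what the sketch cannot capture: none of this arithmetic produces the substantive written feedback the rubric is meant to scaffold.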
Certainly, I don’t doubt the value of peer grading. Nor do I doubt the ability of some, perhaps even many, Coursera students to grade effectively and insightfully. I do, however, doubt that this is what actually happens in Coursera. And initial anecdotal evidence—drawn from my own research into common practices in Coursera, as well as the experiences of others (see here, here, here, and the comments here)—would seem to indicate that it is not.
As we know, grading is often one of the most difficult aspects of teaching. It is a reflective practice, like any other we undertake in the classroom: it changes over time and requires dedication, energy, and engagement. Ideally, it would also include a commitment to helping others learn and improve. How do we work to cultivate this kind of community culture online? And what might we need to do differently to facilitate it?
The real sticking point, for me, isn’t simply the issue of students grading effectively or ineffectively online. Rather, it’s that Coursera doesn’t quite acknowledge the implicit pedagogy, or ideology, at work on the platform: one that suggests grading is work to be outsourced, and that the division of academic labor operating in most large university programs in the US, like those where the Coursera founders work (professors teach; graduate students, or machines, grade), is worthy of replication online.
By thus separating expertise and grading, Coursera would seem to rely on an impoverished conception of grading, one that privileges peer perspectives over expert critique. The model of peer assessment supported by Coursera folds together two assumptions: that peers can approximate or replace the kind of substantive, constructive expert feedback critical to deeper understanding, and that a grade is necessary to learn, full stop, even when credit is not granted.
So although there is no shortage of innovative projects assigned in Coursera courses, which ask students to apply and expand their knowledge in exciting, creative, and challenging ways, there is still a lack of sustained conversation around what grading, or peer assessment, means in this online environment.
Next time, we’ll continue our discussion with a look at one outcome of grading in Coursera: certification.
Image: “Score Cards,” Marcus Hodges, Flickr (CC)