After a successful career in corporate communications, most recently heading the Social Media function at the Tata group (a global company operating in more than 100 countries), Sylvia is currently undertaking PhD research at Lancaster University in the Department of Leadership and Management. Her thesis explores the link between performance management systems and processes and ethical attitudes and behaviours in organisations within specific cultural contexts.
I recently attended a seminar on teaching practice. The focus was how to develop students’ evaluative judgement. The idea was that the better students could critically evaluate their own work, the better placed they would be to improve it and close the gap between their current and desired performance. This ability would also be helpful when they moved into industry, where they would be expected to evaluate their own performance and take ownership of their own development, so to speak.
The discussion turned to how students generally tend not to have a good idea of the grade they should expect on an essay. They also seem rather uninterested in the feedback when they do receive it, especially if it arrives at the end of a module, when they are moving on to a new one with a new tutor and a new assessment. The feedback, for its part, tends to be rather superficial, emphasising easily fixed aspects such as referencing rather than highlighting more complex issues.
Self-reviews and peer reviews were discussed as possible ways to improve student engagement with the evaluation process. For example, a student marks his or her own essay and then exchanges it with another student to see how a peer would grade it, prompting reflection on the part of both students. This could be followed by a discussion with the tutor to get an expert perspective. Instead of actual essays, sample essays could be used to encourage more honest evaluation and interaction.
All this made me think about my own perspective on the matter…
It seems to me that a discussion of ‘evaluation’ cannot be non-contextual: ‘what’ is being evaluated is, to my mind, as central to the evaluation as ‘who’ is doing the evaluating. While the seminar touched on both lab reports and essays in the context of evaluative judgement, in my view they are completely different animals requiring completely different evaluation approaches, and perhaps even different skills. Though I am not an expert on lab reports, my guess is that a good-quality lab report is one that is accurate, clear, consistently applies set standards, and is not open to interpretation; in that sense, it may not be difficult to rate the quality of a report objectively against established criteria. Essays, on the other hand, may be difficult to evaluate against known standards because something like creativity or originality is about pushing known standards, not about following prescribed rules. There are general rules about what a good introduction or conclusion should look like, but if I break those rules and still make an impact, that would ideally not detract from my grade or evaluation. However, whether an essay makes an impact on the reader or evaluator may depend as much on the evaluator’s personal preferences, style, and values as on the quality of the piece itself. Even if a rubric lays down criteria and standards for essays in detail, subjectivity is unavoidable: a rubric may specify a standard such as “excellent critical thinking and reflection”, but what counts as excellence in critical thinking and reflection for you may not for me.
I personally feel that to develop students’ evaluative judgement about their own work, what may be most needed is for them to see more consistency in the feedback they receive from tutors. One reason students stop caring about feedback once a module is completed may be that they do not consider it useful for improving the quality of their work as seen from an assessor’s perspective. If there is a lack of consistency in evaluation among different tutors across different modules, how can we expect students to match the evaluation of any given tutor, when students have nowhere near the same level of expertise or experience?
Another issue is that teachers often dilute their feedback with positive comments, either to be sensitive to students’ feelings or to avoid demotivating them entirely. I have found myself searching for the politest way to communicate feedback, and in the process dulling the blow to the point that it ceased to be a blow at all. Delivering a more accurate picture of the strengths and failings of students’ work may help them more in the long term…and may also create a better foundation for developing their evaluative judgement.
Most importantly, students need to be taught to appreciate the value of critical feedback and to see it as a golden opportunity: a pathway to improvement. This would work, however, only if we teachers school ourselves in giving critical feedback in the level of detail necessary for it to be constructive...this would also give students an insight into the thought process behind evaluation.
I believe it is we teachers, perhaps even from different departments within the same discipline, who need to come together more often to discuss our evaluation styles and approaches, so that students see more consistency in the feedback and grades they receive. This, of course, is easier said than done: as mentioned earlier, there is a subjective element in marking essays that will, and indeed must, never go away…but with the experience and expertise we teachers possess, closing the gap as far as we can is surely within our reach.