It was interesting to see the range of ways in which surveys are being used by the universities represented at the conference. The literature identifies three strands of use for surveys and Student Evaluations of Teaching (SET): providing teachers with information to improve their teaching; providing managers with information for making promotion decisions; and providing students with information for choosing modules (Penny 2003). All of these purposes were represented here, though, judging from the array of talks, we might arguably add ‘providing staff with information about student well-being’ to the list. Speakers from the University of Sunderland and the University of Wollongong were incorporating this kind of information, or using analytics to identify pinch points for students and to make real-time interventions based on live data.
Perhaps of most interest to me as a researcher was how neatly the two talks from the University of Nottingham reflected the dialogue in the wider literature around surveying. One talk focused on using data as a management tool and on improving survey data to provide management information about staff; the other focused on the ways in which question sets present a narrow view of staff performance and can be counter-productive in how they steer staff development. Given the many possible purposes for SETs, conflicts arise when we try to make a single survey do too many things (Titus 2008). On the whole, though, there is good evidence that students value surveys most when they can see that their teachers are able to act on the results, and when both students and staff understand the survey’s purpose (Bassett et al. 2015; Rienties 2014; Edström 2008; Barham and Prosser 1985). This was well encapsulated in the talk from the University of Sunderland: despite the intervention requiring students to complete a survey every week, engagement remained very high, presumably because the benefit was tangible to all concerned. In fact, their surveys achieved 7.5 times the response rate of the centrally-led programme survey.
There were good opportunities here to reflect on what excellence is and whether it is something that can be achieved. According to Ben from YouthSight, everything is always improving, so our challenge is to improve fast enough to meet demand and changing standards. But if we don’t have a clear idea of what excellence is – as the University of Nottingham warned – we can never move toward it. It is clear, though, that we should be using our data to the fullest to help light the way, as the University of Wollongong and Manchester Metropolitan University demonstrated.
I personally wondered whether we had ever given similar consideration to what constitutes satisfaction. Given that this question is so central to so much of our surveying activity, how often do we reflect on what it means, and on whether ‘satisfaction’ is really what we’re hoping to produce?
Above all, this conference has given me cause to keep reflecting on how we could all be more critical about our use of surveys. My own research foregrounds issues of social justice, but even at an operational level it often feels like we miss opportunities to question how we use surveys, what we hope to achieve by surveying, and how we link our strategic ambitions to what actually makes it into survey questions.
The next HEA Surveys Conference takes place on 11 May 2017 and will focus on ‘Understanding and enhancing the student experience’.
Barham, I. and Prosser, M., 1985. Review and redesign: Beyond course evaluation. Higher Education, 14(3), pp.297–306. Available at: http://link.springer.com/10.1007/BF00136110 [Accessed February 10, 2016].
Bassett, J. et al., 2015. Are they paying attention? Students’ lack of motivation and attention potentially threaten the utility of course evaluations. Assessment and Evaluation in Higher Education, pp.1–12. Available at: http://www.tandfonline.com/doi/full/10.1080/02602938.2015.1119801 [Accessed December 22, 2015].
Edström, K., 2008. Doing course evaluation as if learning matters most. Higher Education Research and Development, 27(2), pp.95–106. Available at: http://www.tandfonline.com/doi/full/10.1080/07294360701805234 [Accessed March 30, 2016].
Penny, A.R., 2003. Changing the Agenda for Research into Students’ Views about University Teaching: Four shortcomings of SRT research. Teaching in Higher Education, 8(3), pp.399–411. Available at: http://www.tandfonline.com/doi/abs/10.1080/13562510309396 [Accessed February 10, 2016].
Rienties, B., 2014. Understanding academics’ resistance towards (online) student evaluation. Assessment and Evaluation in Higher Education, 39(8), pp.987–1001. Available at: http://www.tandfonline.com/doi/abs/10.1080/02602938.2014.880777 [Accessed December 2, 2015].
Titus, J.J., 2008. Student Ratings in a Consumerist Academy: Leveraging Pedagogical Control and Authority. Sociological Perspectives, 51(2), pp.397–422. Available at: http://spx.sagepub.com/content/51/2/397.abstract [Accessed February 10, 2016].