The annual HEA Surveys Conference was held on 11 May 2017 at Manchester Conference Centre. The conference was attended by 130 delegates from HEIs across the UK, all keen to hear about, and share their insights and experiences of, the use of surveys and other metrics in understanding and enhancing the student experience. The day was packed with presentations and workshops, which were accompanied by lots of discussion and energetic debate. There was even some sun (yes, in Manchester!) for the delegates to enjoy whilst mingling in the outside courtyard.
Some key themes emerged from the conference, which are covered below:
- Student wellbeing and psychology
- Survey fatigue
- Standardised versus bespoke surveys
- Proceeding with caution
Student wellbeing and psychology
The conference commenced with a keynote speech by Nick Hillman, Director of the Higher Education Policy Institute. During his speech, Nick talked about wellbeing and highlighted how student wellbeing levels are considerably lower than the general population’s. Student wellbeing was also a key theme to emerge from subsequent elements of the conference, where it was clear that it is a hot topic that is perhaps underrepresented in national student experience surveys.
Continuing with the psychological theme, other presentations focused on psychological constructs (such as student identity and self-confidence) as mediators of the relationship between what higher education institutions provide students, on the one hand, and student satisfaction and experience, on the other. Understanding the psychology of students has practical implications for HEIs in enhancing student satisfaction. For example, a presentation by Louise Bunce from Oxford Brookes University described how students who held a ‘consumer’ identity were less satisfied than students with a ‘learner’ identity, and discussed how HEIs can influence the type of identity that students hold. As with wellbeing, it was felt that psychological constructs are underrepresented in student experience surveys.
Survey fatigue
Another theme to emerge from the conference concerned students being over-surveyed, resulting in survey fatigue and low response rates. Some presentations focused on how institutions manage this in order to achieve good response rates for the ‘high stakes’ surveys. Ideas such as a dedicated survey season and student incentives were discussed, as well as having a unified approach to survey implementation within an HEI and engaging academics in the survey process.
There appears to be a tension between wanting to ask students more (e.g. about wellbeing and other psychological constructs) and not wanting to overburden them. This is something that both HEIs and survey providers, such as the HEA, reflect on and strive to balance.
Standardised versus bespoke surveys
A related tension in the use of survey data is that between standardised and bespoke surveys. This was debated at the conference in relation to module feedback, and whether it should be delivered through centralised or decentralised mechanisms. Centralised module feedback provides a strategic overview but can be too simplistic as a quality assessment tool. Conversely, decentralised feedback enables context-specific evaluation that can be used for enhancement, but does not provide a strategic overview. Here at the HEA this tension is something we are only too familiar with, and something we aim to balance, given that we provide national benchmark data.
Proceeding with caution
Finally, a theme to emerge from our afternoon keynote speech by Mantz Yorke, and from other sessions throughout the day, was survey methodology and the interpretation of survey results. The importance of choosing the right metrics was emphasised, as was avoiding leaps of faith based on flawed measurement. With regard to interpretation, when results are reported for smaller units of analysis the numbers involved are low and the margin of error is likely to be high, so the need to treat such results with caution was underlined. As survey providers, this is something we endeavour to make users of our surveys aware of.
So, all in all, it was a thought-provoking day with both methodological and practical implications for those involved in student surveys and metrics. From our perspective, it was good to listen and talk to our colleagues at the ‘coal face’ and hear about the challenges and successes in implementing surveys and acting on their findings. It was also good to hear what is needed from a surveys perspective, which we will reflect on, and reassuring to know that there is still a demand for the postgraduate voice. Through the work we do we hope to promote an active student survey community, and the surveys conference was testament to this.