Who evaluates the student evaluation system?

20 May

It is the end of semester and with it comes student evaluations of the courses they have undertaken. These are a chance for module convenors to hear what we did right and wrong so that we can change modules for next time around. It is a chance for students to give their feedback on the whole module, to vent frustration or even to say thanks.

But there is a major problem: participation levels. In the last evaluations for my MA module there was less than fifty per cent participation – and this is good compared with other modules, apparently! Many students are simply too busy or uninterested to fill in the online form. So we are left with feedback from a minority of the class and no way of knowing whether the feedback we get is representative. Quite simply, the evaluation system in my own institution is not fit for purpose because of the appallingly low participation rate.

Why are student evaluation participation rates so low? The answer, in my experience, lies in the shift towards computer-mediated formats. ‘In the old days’ – a mere five years ago – I had near 100 per cent participation rates. The method was a paper one, incredibly low-tech, but it worked. In the last class of the semester, I would come into a seminar (usually groups of 10–12 students) armed with the evaluation sheets. I would explain the purpose, distribute the sheets and leave the room for 10 minutes or so. One student would be tasked with collecting the sheets, placing them in an envelope and delivering them to the secretary. I did not touch the sheets or see them being filled in.

The method took advantage of a captive audience but did not seem coercive. Students were free not to fill in the sheet, or to cover it with doodles and drawings of daisies – but few took this option. Since attendance at seminars was usually very high, and because students usually wanted to be at the last seminar of the semester in case they could glean exam tips, the participation rate was usually 95 per cent and above.

The current system in my own institution uses BlackBoard – the online teaching platform. It is marketed as a one-stop-shop for student interaction with module material. But there is no incentive or disincentive for students to engage with the evaluation process. Email reminders are just one of a large number of automatically generated emails that students receive. Many of these emails invite deletion before they are read.

The institutional rationale for persevering with a system that clearly does not work (in the sense that student participation is woefully low) is that BlackBoard allows the central management of evaluation data. This might be useful for the institution in terms of its audit trail, but if the system is not actually fulfilling its purpose of informing teaching, then it is worth asking serious questions about institutional priorities: technocratic box-ticking or teaching quality?

So what is to be done? I will revert to my tried and trusted paper method and ignore the electronic one. BlackBoard will continue emailing the students. Over fifty per cent of them will ignore it. The institutional box-tickers will remain happy with mediocrity. And BlackBoard will continue to be paid for a service that does not work.


2 Responses to “Who evaluates the student evaluation system?”

  1. K 23/05/2015 at 12:27 pm #

With your comment about an audit trail we approach the real purpose of this form of centralised evaluation: not to inform our teaching, but to act as a management mechanism to observe and regulate behaviour, occasionally pressurise, inculcate competition, and reward or punish. It provides metrics for inter-school and faculty battles too, in the allocation of scarce resources. Yes, it serves to highlight odious or brilliant teaching, but it has these deeper purposes at the core of its present centralised form. It’s not feedback in intent; it’s a cheapo panopticon.

We run it in a ‘live’ form, so that you can watch the daily results change, and of course the graph of results is compared to the university average in a handy illustration.

From my conversations with students, and from observing how the live results change, the bulk of responses filter in after coursework is marked… so are some students rewarding us for generous marking or for our wonderfully inspirational teaching? In an increasingly metrics-driven world filling up with job-seeking graduates, you can see how the results may not be true ‘north’.

Having said all that, I really like evaluation, as we never got the chance to comment in my undergraduate days. Or postgrad, for that matter. And I’d have had loads to say to my Alma Mater. Loads.

  2. Nathan Jones 10/09/2019 at 9:37 am #

    Great content. Thanks for sharing it with us.
