We generally agree that we should evaluate our teaching. It’s a good thing to do. But how meaningful are some of these exercises?
Just recently I saw a link on Twitter to this delightful article entitled “Availability of cookies during an academic course session affects evaluation of teaching” https://www.ncbi.nlm.nih.gov/pubmed/29956364. If this had been published in December in the BMJ I might have thought it was the tongue-in-cheek Christmas issue. Basically it reported that the provision of chocolate cookies during a session in an emergency medicine course affected the evaluations of the course. Those provided with cookies rated the teachers more highly, and even considered the course material to be better; these differences were reported as statistically significant. Apart from making you think it might be worth investing in some chocolate chip cookies, what other lessons do you draw from this fairly limited study?
This raised a few issues for me. Firstly, it made me reflect on the meaningfulness of statistically significant differences in the real world. Secondly, it made me quite concerned for staff in organisations such as universities, where weight is sometimes placed on such scores when staff are being judged. This is because, thirdly, it brought to mind the multiple possible confounders in attempts at evaluation.
When choosing your evaluation methods, consider several issues: if you rely on participant feedback, how timely is it? If you intend to measure something further down the track, such as knowledge acquisition, at what point is this best done? Question the validity and reliability of the tools being used. Consider using multiple methods, and think about how you might measure the “hard to measure” outcomes. If programs (rather than just individual sessions) are being evaluated, then ratings and judgments should be sourced more broadly. I will not go into all the methods here (there is a good section in Understanding Medical Education: Evidence, Theory and Practice ed. T Swanwick), but the article above certainly raised points for discussion.
Surprisingly, student ratings have dominated evaluation of teaching for decades. If evaluations are in an acceptable range, organisations may unwittingly rest on their laurels and assume there is no need for change. Complacency can limit quality improvement when there is an acceptance that the required boxes have been ticked but where the content of the boxes is not questioned more deeply.
More meaningful quality improvement may be achieved if evaluation methods are scrutinised more closely in terms of the relevant outcomes. Outcomes that are longer term than immediate participant satisfaction may be more pertinent, and some quality measures may be needed over and above tick-box responses. The bottom line is: consider the possible confounders, and question how meaningful your evaluation process really is.