There are multiple approaches to evaluation, and many relate to predicting outcomes in training or to issues of Quality Improvement. As professionals, medical educators aim to do their job well and benefit from evaluating what they do. At a higher level, Program Evaluation is an important issue. At all levels, evaluation helps you decide where to focus energy and resources and when to change or develop new approaches. It also prevents you from becoming stale. However, it requires curiosity, access to the data, expertise in interpreting it, a commitment to acting on it, and organisational support.
Doing it better next time
So, at the micro level, I get asked to give a lecture on a particular topic, to run a small group or to produce some practice quiz questions for exam preparation. How do I know if I do it well or even adequately? How can I know how to do it better next time?
There are many models of evaluation, particularly at the higher levels of program evaluation (if you are keen you could look at AMEE guides 27 and 29 or this http://europepmc.org/articles/PMC3184904 or https://www.researchgate.net/publication/49798288_BEME_Guide_No_1_Best_Evidence_Medical_Education ). They include the straightforward Kirkpatrick hierarchy (a good example of how a 1950s PhD thesis in industry went a long way), which places learner satisfaction at the bottom, followed by increased knowledge, then behaviour in the workplace and, finally, impact on society – or health of the population in our context. As you can imagine, very few studies are able to look at that final level.
Some methods of evaluation
The simplest evaluation is a tick-box Likert scale of learner satisfaction. Even this has variable usefulness depending on how the questions are structured, the response rate of the survey and the timeliness of the feedback. The conclusions drawn from a survey sent out two weeks after the event with a response rate of 20% are unlikely to be very valid. Another issue with learner satisfaction is the difference between measuring the presenter’s performance and the educational utility of the session. I well recall a workshop speaker who got very high ratings as a “brilliant speaker”, yet none of the learners could list anything they had learnt that was relevant to their practice. You could try to relate the questions to required “learning objectives”, but these can sometimes sound rather formulaic or generic. It is certainly best if the objectives are the same as those intended by the presenter and geared towards what you actually intended to happen as a result of the session. When evaluating, you need to be clear about your question: what do you want to know?
If you add free-text comments to the ratings with a request for constructive suggestions, you are likely to get a higher quality response and one that may influence future sessions. It is also possible to ask reflective questions at the end of a semester about what learners recall as the main learning points of a session. After all, we really want education that sticks!
Another crucial form of evaluation is review with your peers. Ask a colleague to sit in if this is not routine in your context. Feedback from informed colleagues is very helpful because we can all improve how we do things. It is hard to be self-critical when you have poured a large amount of effort into preparing a session, and outside eyes may see things we cannot.
To progress up the hierarchy you could administer a relevant knowledge test at a point down the track or ask supervisors a couple of pertinent questions about the relevant area of practice.
Trying out something new
If you want to try an innovative education method or implement something you heard at a conference, it is good practice to build in some evaluation so that you can get a hint as to whether the change was worth making.
An example
A couple of years ago I decided to change my Dermatology and Aged Care sessions into what is called a Flipped Classroom, so I put my PowerPoint presentations and a pre-workshop quiz online as pre-viewing for registrars. I then wrote several detailed discussion cases with facilitator notes for discussion in small groups. I took a similar approach with a Multimorbidity session, where I turned a presentation into several short videos with voiceover and wrote several cases to be worked through at the workshop.
I wanted to compare these with the established method, so I compared the ratings to those of the previous year’s lecture session (the learning objectives were very similar). Bear in mind there is always the problem of these being different cohorts. I also asked specific questions about the usefulness of the quiz and the small group sessions and checked how many registrars had accessed the online resources prior to the session. It was interesting to me that the quiz and the small groups were rated as very useful and that the new session had slightly higher ratings for achievement of learning objectives. Prior access to the online material made little difference to the ratings. I also assessed confidence levels at different points in subsequent terms. In an earlier trial of a new teaching method I also assessed knowledge levels.
Education research is often “action research”: there is much you can’t control and you just do the best you can. However, if you read up on the theory, discuss it with colleagues and see changes made in practice, then it all contributes to your professional development. Sharing it with colleagues at a workshop adds further value.
Some warnings
Sometimes evaluations are done just because they are required, to tick a box, and sometimes we measure only what is easy to measure. Feedback needs to be collected and reviewed in a timely fashion so that relevant changes can be made and it is not just a paper exercise. There is no point having the best evaluation process if future sessions are planned and prepared without reference to the feedback. It would be good if we applied some systematic evaluation to new online learning methodologies and didn’t just assume they must be better!
Evaluation is integral to the Medical Educator role
A readable article on the multiple roles of The Good Teacher is found in AMEE guide number 20 at http://njms.rutgers.edu/education/office_education/community_preceptorship/documents/TheGoodTeacher.pdf
Evaluation is a crucial part of the educator role; the educator’s role is diminished, and the usefulness of any evaluation is curtailed, when the two (education and evaluation) are separated. Many things influence training outcomes, including selection into training, the content and assessment of training, and the processes and rules around training. As an educator you may have less and less influence over decisions about selection processes and even over the content of the syllabus. However, you may still have some say in what happens during training. I would suggest that the less influence educators have in any of these decisions, the less engaged they are likely to be.
At the level of program evaluation by funders, these tasks are more likely to be outsourced to external consultants, with a consequent limitation in the nature of the questions asked, a restriction in the data utilised and conclusions which are less useful. “Statistically significant” results may be educationally irrelevant in your particular context. Our challenge is to evaluate in a way which is both useful and valid and which helps to advance our understanding as a community of educators. A well thought out study is worth presenting or publishing.