The curriculum walks through the door – sometimes

This was a favoured concept in the days before college curricula existed (hard to believe, I know). Certainly a gifted teacher can turn any clinical encounter into a multi-faceted learning opportunity (often done on teaching visits). Plus, the self-directed learner will fill in their perceived gaps and GPs are the sort of doctors who can turn their hand to anything (a uni colleague of mine headed out bush after a single RMO term in obstets and delivered babies for years). But those were the good old days and the assumptions were a bit idealistic. The pendulum has swung decisively in the opposite direction.

The question of how much clinical exposure and experience is adequate in various areas remains unanswered, but this does not stop people making policies. There is the much-repeated “ten thousand hours” to become an expert, and proceduralists in the US assert that trainees' shorter working hours mean that specialty training should be longer. Well, no one is going to win that argument as regards Australian GP training as long as the government is effectively in charge of training.

In GP training, clinical exposure varies greatly because practices themselves vary so much. A registrar might do only 18 months in general practice (or less with ACRRM) and might work in only two locations. Previously we collected end-of-term registrar feedback data on multiple aspects of the practice experience and could state with confidence which practices were at one end of the distribution curve for older patients or kids, for instance. Sometimes there were surprises. Registrars may do far fewer minor procedures in a rural practice with a part-time surgeon than in a keen urban practice. Sometimes you have to dig deeper than the statistics. All this enabled evidence-based training and was useful in advising choices for subsequent placements – but only if educational priorities rank as high as training-location priorities. In any case, such data is now lost in databases made defunct by the recent changes to training.

Growing with your patients

Still, does it really matter in the long term? Currently, in the climate of doom-saying about population demographics and health-system “sustainability”, there is a lot of angst about how registrars see fewer older patients than established GPs. Is anyone surprised? Even BEACH figures showed that younger (Fellowed) GPs see fewer older patients – and someone still needs to see the kids! My second GP job was in an inner-city practice whilst still working weekends in ED for two years. I was happy with acute presentations, became something of an expert in STIs and contraception (and, later, some occupational health) and augmented this by doing the FPA course in my holidays. I left there to move to the suburbs and a group practice, to do more “family medicine” in a practice with two GP obstetricians. The years passed and I did some extra dermatology and paeds, a women’s health course and some research into menopause. I didn’t do a lot of extra mental health because I had trained as a social worker and worked as a counsellor. If we had moved to the country I would have upskilled again in emergency medicine. I then did a geriatrics course, followed by a PhD on frailty. You can see where this is heading. Some years later, out of curiosity, I looked at the demographics of “my patients” – those who generally only saw me (no mean feat given I am part-time in clinical practice). Their mean age was 60 – which, fortuitously, was my age!

Let’s face it: aged care will require a whole new set of knowledge and skills in twenty years, and GPs will be updating most of what they learnt as registrars. Oh dear, all that wasted time learning how to bill GP Management Plans!

Recognising the curriculum knocking on the door – relevant up-skilling

I have taught aged care for twenty years (and I will talk about teaching aged care in a later post) and am keen on it, but that doesn’t mean I think all registrars need to see a lot of it or be as enthused as I am. General practice is dynamic across a lifetime, and we need to encourage registrars to recognise community needs and do something about their gaps. They need to know the basics and take responsibility for the patients they see. “Just a script” should be the chance to reflect on polypharmacy, rational prescribing and de-prescribing. Multimorbidity is not limited to older patients, so experience can be gained with younger age groups. Of course, exams need to be passed.

Registrars all have different backstories, and maybe the ex-geriatrics registrar actually needs to see more kids and sports injuries. Maybe the ex-orthopaedic registrar needs to do more mental health. Maybe they can be directed to useful extracurricular courses and CPD to set a pattern for lifelong learning. There is nothing like a bit of extra knowledge to open our eyes, help us see patient problems we overlooked before, and address them more effectively.

As educators and supervisors we have the opportunity to (hopefully) individualise the vast resources that are the curricula and to go a bit beyond the mandatory syllabuses that need to be ticked off.

Given the brevity of GP training, and the breadth and dynamism of general practice, a disposition to ongoing professional development is the crucial priority.

Small groups

I am fresh from a week of watching comedy shows at the Edinburgh Fringe, and it made me think of my earlier post about successful presentations to larger groups and how much of that “success” relates to being a performer – even though some of the Fringe venues were about the size of my bedroom, with space for only a large small group! Certainly the comics were very well prepared – down to the smallest “ad lib” – but these sessions also highlighted for me the differences, in both purpose and aims, between speaking to groups and “facilitating” a group.

If you explore the literature you will find that much of it refers specifically to Problem Based Learning (which I experienced as a medical student), which has quite specific criteria for how groups function. Despite the many years of PBL, studies on outcomes are still variable, which demonstrates how difficult educational research is. On the other hand, financial stringencies are moving some schools back to lectures, to larger “small” groups, or onto online options. Given this, it would be good to know what we might be losing.

In immediate evaluations, registrars in vocational training often rate small groups highly, but this may also be because of the added value of interpersonal contact, debriefing, support and so forth, in addition to educational “effectiveness”.

Groups can serve many purposes

They aren’t generally for delivering information. They can function as tutorial groups following on from lectures (the undergraduate model), but in postgraduate training they more often provide a framework for case discussion, topic exploration or debriefing. They can be part of Flipped Classroom models. The dynamics of a small group can be used to enhance educational value. As an educational method, the small group requires more listening and drawing out, and sometimes ad hoc changes of direction in response to group needs. A small group may have a joint purpose and be more than the sum of its individual parts. The facilitator assists the group to achieve that purpose and often feels more responsibility for the development of each member. In addition to the acquisition of information, there is a package of benefits – don’t underestimate the power of social interaction.

How hard is it to run a good small group?

Some institutions seem to believe, mistakenly, that being a health professional automatically qualifies you to run groups, but real skills are required. I have to confess that when I first trained as a social worker, I opted for “casework” over “groupwork” as I much preferred the one-to-one interaction, and that probably remains true. However, that was in a therapeutic rather than an educational context, and small groups do appear to be powerful tools in medical education. It is a pleasure to observe educators with skills (natural or acquired) facilitating groups more effectively than I know I do myself. Registrar comments easily identify what not to do when running a small group: be condescending, fail to contain domineering members, appear unprepared. I have run groups at all levels, and the good thing about registrar groups is that they are very motivated (although sometimes more critical) and generally have good background knowledge. You are often consolidating and applying knowledge rather than just passing it on.

What works best? 

Groups are easier if the members are known to you, as has often been the case in my experience. It is more difficult if you are parachuted into a workshop and told to “facilitate” a random group – this is not really best practice. Numbers matter, and most of the literature agrees on somewhere between five and ten, with minimal hard evidence. Facilitator or not? This depends very much on the topic and the level of experience of the members. Registrars often like an educator to be there as a resource, but if this is not possible then it is a good idea to have a clear structure for the discussion.

Hints for running a small group

  • Set ground rules if need be (and the goals of the group discussion to avoid disappointed expectations)
  • Assign roles to encourage engagement (scribe, timekeeper, facilitator, resource finder)
  • Have a good structure with appropriate resources on hand
  • Utilise member skills if you know the background of your group members, e.g. if you have a reproductive medicine topic, defer to someone who has just done a year in obstetrics
  • You can use some of the same methods as for larger groups (ice breaker, split into pairs)
  • You can’t be an expert on everything and members should be thinking for themselves, so feed questions back to the group
  • Involve everyone. Look out for quiet members and use strategies to quieten the overly noisy contributors.  Have some prepared questions to direct to individuals.
  • If you utilise more senior learners to facilitate groups they need to feel they are also learning and not just being used.

Small groups can have many side benefits. They enable you to get to know students better, to flag those who are struggling in various ways and to encourage further specific learning. However, these are often seen as intangible benefits when institutions consider the cost-efficiency of various methods. It is a meaningful challenge to think about how you might evaluate different educational methods.

Practice Based Small Group Learning (PBSGL)

I thought it worth mentioning this interesting approach, which has been used for many years in Canada for continuing professional development. It has subsequently been adapted for use in the UK (particularly Scotland). It is an interesting way of organising nationwide CPD with some uniformity of topic and approach, although it does involve fees. It has also been implemented in GP training in Scotland, and I was able to sit in with one of these small registrar groups in Aberdeen a few years ago after prior email discussions with the organisers. It is obviously a relatively economical approach, and I would think it very appropriate for more senior registrars. It provides structured cases and resources on pre-determined topics. If you are interested, have a look at http://careers.bmj.com/careers/advice/view-article.html?id=20000765

I would love to see the Colleges consider this for CPD.

Evaluation – How do we know we are doing a good job?

There are multiple approaches to evaluation, and many are related to predicting outcomes in training or to issues of Quality Improvement. As professionals, medical educators aim to do their job well and benefit from evaluating what they do. At a higher level, Program Evaluation is an important issue. At all levels, evaluation helps you decide where to focus energy and resources, and when to change or develop new approaches. It also prevents you from becoming stale. However, it needs curiosity, access to the data, expertise in interpreting it, a commitment to acting on it, and organisational support.

Doing it better next time

So, at the micro level, I get asked to give a lecture on a particular topic, to run a small group or to produce some practice quiz questions for exam preparation.  How do I know if I do it well or even adequately?  How can I know how to do it better next time?

There are many models of evaluation, particularly at the higher levels of program evaluation (if you are keen, you could look at AMEE guides 27 and 29, or this http://europepmc.org/articles/PMC3184904 or https://www.researchgate.net/publication/49798288_BEME_Guide_No_1_Best_Evidence_Medical_Education ). They include the straightforward Kirkpatrick hierarchy (a good example of how a 1950s PhD thesis in industry went a long way), which places learner satisfaction at the bottom, followed by increased knowledge, then behaviour in the workplace and, finally, impact on society – or health of the population in our context. As you can imagine, very few studies are able to look at that final level.

Some methods of evaluation

The simplest evaluation is a tick-box Likert scale of learner satisfaction. Even this has variable usefulness depending on how the questions are structured, the response rate of the survey and the timeliness of the feedback. The conclusions drawn from a survey sent out two weeks after the event, with a response rate of 20%, are unlikely to be very valid. Another issue with learner satisfaction is the difference between measuring the presenter’s performance and measuring the educational utility of the session. I well recall a workshop speaker who got very high ratings as a “brilliant speaker”, yet none of the learners could list anything they had learnt that was relevant to their practice. You could try to relate the questions to the required “learning objectives”, but these can sound rather formulaic or generic. It is certainly best if the stated objectives match those intended by the presenter and are geared towards what you actually intended to happen as a result of the session. When evaluating, you need to be clear about your question: what do you want to know?
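
As a rough illustration of the arithmetic (my own sketch, not from any curriculum – the function and the numbers are invented), here is how the two figures worth reporting together, response rate and mean rating, fall out of the raw data:

```python
# Minimal sketch: summarising tick-box Likert feedback for one session.
# Assumes ratings are integers on a 1-5 scale from returned surveys.

def summarise_feedback(ratings: list[int], invited: int) -> dict:
    """Return the response rate and mean rating for a session."""
    if not ratings:
        raise ValueError("no responses returned")
    return {
        "n_responses": len(ratings),
        "response_rate": round(len(ratings) / invited, 2),
        "mean_rating": round(sum(ratings) / len(ratings), 2),
    }

# Eight responses from forty registrars is the 20% scenario above:
# the mean looks tidy, but the denominator tells you how little it means.
print(summarise_feedback([5, 4, 4, 5, 3, 4, 5, 4], invited=40))
```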

If you add free comments to the ratings, with a request for constructive suggestions, you are likely to get a higher-quality response and one that may influence future sessions. It is also possible to ask reflective questions at the end of a semester about what learners recall as the main learning points of a session. After all, we really want education that sticks!

Another crucial form of evaluation is review with your peers. Ask a colleague to sit in if this is not routine in your context. Feedback from informed colleagues is very helpful because we can all improve how we do things. It is hard to be self-critical when you have poured a large amount of effort into preparing a session, and outside eyes may see things we cannot.

To progress up the hierarchy you could administer a relevant knowledge test at a point down the track or ask supervisors a couple of pertinent questions about the relevant area of practice.

Trying out something new

If you want to try an innovative educational method, or implement something you heard at a conference, it is good practice to build in some evaluation so that you get at least a hint as to whether the change was worth making.

An example

A couple of years ago I decided to change my Dermatology and Aged Care sessions into what is called a Flipped Classroom, so I put my PowerPoint presentations and a pre-workshop quiz online as pre-viewing for registrars. I then wrote several detailed discussion cases, with facilitator notes, for discussion in small groups. I took a similar approach with a Multimorbidity session, turning a presentation into several short videos with voice-over and writing several cases to be worked through at the workshop.

I wanted to compare these with the established method, so I set the ratings against those of the previous year’s lecture session (the learning objectives were very similar) – bearing in mind there is always the problem of these being different cohorts. I also asked specific questions about the usefulness of the quiz and the small-group sessions, and checked how many registrars had accessed the online resources prior to the session. It was interesting to me that the quiz and the small groups were rated as very useful, and the new session had slightly higher ratings for achievement of learning objectives. Prior access to the online material made little difference to the ratings. I also assessed confidence levels at different points in subsequent terms; in an earlier trial of a new teaching method I had also assessed knowledge levels.
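
For readers who want to run such a comparison themselves, a minimal sketch (my own illustration with invented data, not the actual figures from these sessions): a rank-based test such as Mann-Whitney U suits ordinal Likert ratings better than a t-test, though no statistical test removes the different-cohorts problem.

```python
# Sketch: comparing session ratings across two cohorts (invented data).
from scipy.stats import mannwhitneyu

lecture_ratings = [3, 4, 3, 4, 5, 3, 4, 4, 3, 4]   # previous year's lecture
flipped_ratings = [4, 4, 5, 4, 5, 4, 3, 5, 4, 4]   # flipped-classroom session

stat, p = mannwhitneyu(lecture_ratings, flipped_ratings, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")

# Even a "significant" p-value here only speaks to satisfaction,
# the bottom of the Kirkpatrick hierarchy, not to learning or behaviour.
```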

Education research is often “action research”.  There is much you can’t control and you just do the best you can. However, if you read up on the theory, discuss it with colleagues and see changes made in practice then it all contributes to your professional development.  Sharing it with colleagues at a workshop adds further value.

Some warnings

Sometimes evaluations are done just to tick a box, and sometimes we measure only what is easy to measure. Feedback needs to be collected and reviewed in a timely fashion so that relevant changes can be made and it does not become just a paper exercise. There is no point having the best evaluation process if future sessions are planned and prepared without reference to the feedback. It would be good if we applied some systematic evaluation to new online learning methodologies and didn’t just assume they must be better!

Evaluation is integral to the Medical Educator role

A readable article on the multiple roles of The Good Teacher is found in AMEE guide number 20 at http://njms.rutgers.edu/education/office_education/community_preceptorship/documents/TheGoodTeacher.pdf

Evaluation is a crucial part of the educator role; when education and evaluation are separated, the educator’s role is diminished and the usefulness of any evaluation is curtailed. Many things influence training outcomes, including selection into training, the content and assessment of training, and the processes and rules around training. As an educator you may have less and less influence over decisions about selection processes, and even over the content of the syllabus. However, you may still have some say in what happens during training. I would suggest that the less influence educators have in any of these decisions, the less engaged they are likely to be.

At the level of program evaluation by funders, these tasks are more likely to be outsourced to external consultants, with a consequent limitation in the nature of the questions asked, a restriction in the data utilised, and conclusions which are less useful. “Statistically significant” results may be educationally irrelevant in your particular context. Our challenge is to evaluate in a way which is both useful and valid and which helps to advance our understanding as a community of educators. A well-thought-out study is worth presenting or publishing.