
Appropriate Assessment – Entrustable Professional Activities

Rather than start with basic theories of assessment, I thought I would discuss a relatively new approach, Entrustable Professional Activities (EPAs), which I became quite excited about when I first heard of them – at an AMEE conference in Lyon in 2012. One of the appeals was that they are Work-Based Assessments (therefore closest to the top of the well-known Miller’s Pyramid) and try to address issues of competence. They seemed eminently appropriate to general practice because they assess complex activities – MCQs are good for other things.

History of EPAs

Publications on EPAs date back to 2005, and presumably the concept was being developed prior to that – in the Netherlands, then the US and Canada. Back in 2012 it appeared that they had been developed and used largely in the hospital context – beginning in obstetrics, paediatrics and some physician training. On my return from the conference I explored the literature and discovered that the RANZCP had already developed EPAs for psychiatry training, and we met with one of the psychiatrists involved in their implementation. While we were developing our model, a 2013 paper described the development of EPAs in a US family medicine residency, and an AMEE workshop by Karen Schultz and Jane Griffiths reported on implementing them in Canadian GP training across a group of four universities. All these contexts are very different, as are their implementations. Even the final number of EPAs varies greatly, from sixteen to seventy-two.

What are EPAs?

An EPA can be defined as a discrete unit of delegated work – a professional task which is core to the particular discipline. EPAs are observable activities which can be judged. They are chosen on the basis of being important in the specialty and having implications for patient care. They relate to competencies, but they involve global judgment rather than atomised checklists. So “communication” is not an EPA, because it is a specific skill/competency rather than a professional task; it is involved in many professional tasks. On the other hand, “being able to correctly diagnose depression, assess the patient and initiate appropriate treatment” is a specific professional task, relevant to general practice, which requires several competencies and a collection of knowledge, attitudes and skills. The concept of trust is central to EPAs, and the choice and wording of EPAs are important. This is a structured assessment by someone who knows the trainee in an ongoing way – in contrast to the snapshot of the Teaching Visit or Mini-CEX.

The basis of the assessment / judgment

Competencies have always been a problem for GP training, given that the more complex skills required can’t always be reduced to individual competencies – and some relevant competencies are difficult to measure. Just saying you are measuring a competency doesn’t make it so. In a colloquial sense the EPA works at the intuitive level of “would you send your granny to this doctor?”, but it then attempts to articulate that intuitive judgement. It is analogous to clinical diagnostic judgments. The process relies on the expertise of supervisors, who can draw on various sources of data (observation, records, feedback etc). The involvement of supervisors is also why it is important not to overload them with irrelevant or superfluous EPAs.

The anchor for assessment relates to the level of supervision, and this fits nicely with current RACGP standards. The question is: “can this registrar be entrusted to perform this professional task at this particular (defined) level of supervision?” A specified number of EPAs can be assessed across training and can be blueprinted against curricula if desired. Given the breadth of GP skills, and the variation in training contexts, it does not make sense for EPAs to be linked to milestones – except perhaps for the most basic ones, relevant to ubiquitous conditions or those with high significance for patient safety, e.g. “the registrar can be trusted to manage the acute presentation of an unwell young child who presents with a fever.”

Why use EPAs?

In hospital training it is more feasible to observe all of a trainee’s activities prior to entrustment. In general practice there is much unobserved interaction with patients from the outset, and it is important to know whether registrars are safe. The EPA process attempts to articulate the unspoken – what supervisors already do – and is intuitively appealing in the Australian supervisory situation. It happens cumulatively over time, rather than as a one-off assessment. As an article by ten Cate notes, EPAs attempt to go beyond competencies and tick-boxes. General practice is a discipline that requires the integration of multiple competencies.

If you want to know what has been done in the Australian context…

Implementation has varied internationally. EPAs have been used as the basis for a new curriculum (a US example); they have been laboriously mapped to all competencies (US); and they have been developed from, and integrated with, pre-existing “field notes” in Canada (Karen Schultz and Jane Griffiths kindly shared their experiences with us in Skype conversations – if you are going to AMEE they would be happy to chat). We chose a representative approach (we already have a curriculum) and decided that specific EPAs for assessment should be chosen a) where the content and context suit the method; b) to represent the scope of practice; and c) with contribution from supervisors and educators. We came up with eleven EPAs, with guidelines for supervisors, intended to be part of an overall program of assessment – I will mention Programmatic Assessment in a later post. Our process of consultation, workshops, implementation and evaluation was described at a 2014 GPET Research Workshop and presented at a 2015 AMEE conference https://amee.org/getattachment/conferences/AMEE-2015/AMEE-2015-App-Data/8G-Short-Communications.pdf

A summary can be seen on the poster presented at the 2014 GPET conference – see http://www.mededpurls.com/pdf/EPAposterAug14.pdf

If you want to read more about EPAs, there are a couple of readable articles by Olle ten Cate, who introduced the concept:

Trust, competence, and the supervisor’s role in postgraduate training. BMJ. 2006 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1592396/

Nuts and Bolts of Entrustable Professional Activities. J Grad Med Educ. 2013 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3613304/

The Australian psychiatry program can be found here for an interesting comparison. It is of course a longer training program. https://www.ranzcp.org/Pre-Fellowship/2012-Fellowship-Program/Assessment-overview/Entrustable-Professional-Activities.aspx

Maybe we will see these being implemented more broadly in the future.

 

Observation – the Teaching Visit

For both assessment and effective feedback you can’t beat direct observation (I’ll look at associated assessment issues later). Those who train in hospitals are not short of opportunities for observation, with ward rounds, teams and open-plan spaces. Even so, tools had to be introduced to facilitate feedback in the busy clinical environment. GP training presents a different challenge. In the worst-case scenario the registrar lands in a practice and begins to practise, unobserved, behind the closed door of the consulting room. This is probably a hangover from the days when doctors were assumed to be ready to enter general practice straight from hospital experience. On the other hand, GP training currently has a significant advantage in its built-in requirement for Clinical Teaching Visits (CTVs): several hours dedicated to direct observation of clinical activity within the practice setting. The practical process for doing this is outlined by the training organisations. I will cover the value of the supervisor “sitting in” in a later post.

The luxury of the CTV

The Clinical Teaching Visit does not merely provide a snapshot of diagnostic acumen in a particular presentation, an isolated management skill or some procedural competency. It provides the opportunity to observe, and give immediate feedback on, the full breadth of GP skills: communication, diagnosis, prevention, management strategies, follow-up, test-ordering, prescribing, safety-netting, awareness of community resources and functioning within a team. It is also from an “external” perspective. It provides the opportunity to act on (and practise) the feedback in the subsequent consultations (e.g. “how about you try to listen for a minute at the start of the next consultation?”). If used to its potential, the CTV also enables evaluation of the practice and supervisory environment, both of which are crucial to quality education and training.

You get to see the bigger picture.

A closer look

There are several useful tools to focus the mind when registrars are being observed by either supervisor or teaching visitor. These tend to categorise and provide a rating scale for some of the micro-skills of consulting.  There are educational reasons for doing CTVs which address broader issues.  It can be an opportunity to develop the productive mentoring relationship between Educator and Registrar.  It is an opportunity for supporting, guiding, challenging and encouraging (not just assessing).  If we fail to do these things we are probably providing a sub-optimal teaching visit.  It is an individualised experience where the skilled visitor is responsive to the trainee’s context, competence and potential.

Communication skills

Feedback on communication skills includes the usual advice about listening, asking questions and avoiding jargon in explanations, but the observer can also identify non-verbal communication, tone of voice or habitual use of words, and feed this back while it is fresh in the minds of both. In the CTV, the observer can note, more holistically, whether the doctor is able to adapt their approach to different patients.

Clinical knowledge

The breadth of knowledge addressed is often limited by the patients seen, although the range – over four to eight patients in a session – can be significant. However, identifying, within the specific context, a few discrete areas for further study is powerful and can include suggestions for self-directed learning.

Practical consultation skills

These are often the hidden gems in a teaching visit, where an experienced GP is able to proffer practical suggestions: “Here’s a tip for writing your computer notes…”, “You could try this to manage several presenting problems…”, “Here’s a way to do a concise neuro exam…” These are often remembered long after the visit – and even after being Fellowed.

Encouragement

Don’t underestimate the value of positive collegial statements such as “I like the way you did…”, “I learned something from you today when you said…”, “You are a great family doctor”.

The written report

The written report can summarise the patterns revealed in successive consultations and log the registrar’s progress in competence over several visits. There is not much evidence on the effectiveness of written (as opposed to verbal) feedback. Qualitative free text is often valued more highly than ratings. Teaching Visitors may write reports differently (depending on how constrained they are by the template). Having read hundreds of reports, I can see different benefits in different approaches. Some couch feedback around the management of the specific patient case, whereas others summarise general competencies in the conclusion. Some give snippets of practical advice, and others provide registrars with numerous up-to-date links to relevant online resources. Ideally, these differences would be modified to suit the learner being observed. It adds value if the recorded document is one to which the registrar might usefully refer later. It is probably important to go back to basics and ask yourself why, and for whom, the report is being written: for the supervisor, the learner, the training organisation, for assessment, for legal reasons etc. Purpose should inform method (and direct any evaluation).

What works in CTVs?

Registrars often comment, at the end of training, that the CTVs were the most useful part of training. We can speculate about what works in teaching visits, but it has also been studied. If you google “How useful are clinical teaching visits to registrars and supervisors” you will find a presentation from a previous General Practice Education and Training (GPET) conference. This study surveyed and interviewed supervisors and registrars (in all terms) over a six-month period. 95% of registrars reported they were encouraged to identify areas of clinical knowledge and consultation skills for improvement. Only 13% of registrars found the feedback surprising or challenging. 94% of supervisors felt they were provided with useful information about their registrar’s skills, and the majority reported intentions to modify areas of teaching and supervision. The study also looked at practical issues in the organisation of visits and came up with suggestions (from registrars and supervisors) for overall improvement, which included: some variability of visitors, having supervisors also sit in, more communication with the supervisor during the visit, more time for discussion, and attention to timing (not late in the term). It was noted that the report should only contain feedback discussed at the visit and could link clinical feedback to articles for further study.

Some thoughts on large group presentations

The “lecture” has had a chequered history in terms of use and popularity. Here’s where I confess that I went to the University of Newcastle in its early days.  It was committed to Problem Based Learning to such a degree that even the few lectures that were programmed had to be called Fixed Resource Sessions instead. How we craved a bit of “spoon feeding” in spite of educational theory!

Historically, lectures can be seen as a means of transmitting oral tradition. In medicine, teaching was initially by apprenticeship; “modern” lectures were introduced in the mid nineteenth century. By the early twentieth century there were concerns that lectures were too passive and merely encouraged memorisation and cramming of knowledge. The lecture is still a staple of many undergraduate courses, which vary between countries. Some older publications talk a fair bit about lectures and note-taking, but the latter is barely relevant in these days of online PowerPoints and students taking photos of slides when needed. However, PowerPoint itself is already an object of criticism, particularly in terms of the tendency to merely read from the slides.

Are lectures an effective method in medical education, and how do they compare to other methods? Thinking educationally, it is useful first to step back, consider which part of the curriculum is being addressed, and choose your method according to its appropriateness for the content. There is some evidence that overall effectiveness may be best with mixed methods.

Lectures can efficiently deliver large amounts of information to large groups. Effectiveness is another issue, and there are obviously good and bad lectures. There is a readable summary in Matheson, C. The educational value and effectiveness of lectures. The Clinical Teacher, Vol 5, Issue 4, Dec 2008, pp 218-21

The general criticism of lectures is that they are a passive form of learning: most students’ attention wanes after twenty minutes or so, and they are unlikely to remember much of the content if this is checked later. I have another confession. I actually enjoy listening to an interesting lecture and I have a long attention span – but I still don’t remember the content a few months later. I’m often surprised when I rediscover my notes at a later date.

The standard lecture is generally an expert speaking on a prescribed topic and it is a favourite of many students and registrars. Within GP training there is a relevant debate about whether the presenters should be other specialists or GPs with expertise in the area.  This often raises issues of credibility (but also relevance) which can impact on how a lecture is rated.

There are now many variations on the theme of lectures, such as online lectures in Massive Open Online Courses (MOOCs) or in the form of video or PowerPoint with voice-over (for use in “flipped classrooms”).

Traditional lectures have also been compared to interactive lectures. The latter aim to engage the learners at various points during the lecture.  Lectures are rarely presented without some sort of audio-visual support these days (PowerPoints are ubiquitous, appearing even at weddings and funerals). Some hints for this:

  • There are lots of sources of advice on how to present information on PowerPoint slides, and there are strong opinions about whether to talk around pictures versus the usual bullet points. Personally, as an audience member, I quite like to be able to read the bullet points myself during a lecture, but different people have different preferred learning styles.
  • The one unarguable piece of advice is to ensure all the audience can read what you put on the slide, so avoid busy tables scanned from publications.
  • One of the hardest things is to pace the presentation correctly and to feel confident enough to take sufficient time.
  • There are different ways of structuring a presentation and it may depend on the topic. If cases are included as illustrations it is a good idea to not leave them to the end. It can almost be guaranteed that you won’t get through them all and learners will feel they have missed out on something crucial – even if they haven’t.
  • It is crucial to pitch the presentation appropriately for the audience (and student feedback will let you know). Delivering an undergraduate lecture to postgraduate doctors can be perceived as condescending, and playing numerous videos of new surgical or imaging techniques to a bunch of GPs may be seen as less than relevant.
  • It’s often helpful to flag the aims or learning objectives at the start (so they know where you are all heading) and to highlight a couple of take-home messages at the end. You can try asking the audience at the end what they felt the take-home message was for them.
  • Some pre-reading, pre-viewing or a pre-session quiz can increase the value of a lecture, but be prepared for 50% participation or less in online resources. This may not be as important as you think. The non-participants may catch up later, may know it anyway, or may be the ones who would have checked their Facebook through the standard lecture.
  • Be aware that a good presentation requires a significant amount of time for preparation and even updating clinical content for an existing presentation is not always a quick task.

Ways to break up a long lecture include:

  • Short videos (but not if the technology is unreliable or if you don’t have a support person to troubleshoot).
  • Occasional cartoons.
  • Dividing a large group into pairs for a couple of minutes to answer a question or discuss something. This has the added value of encouraging interaction with peers and doesn’t involve too much chair-moving.
  • If you are confident in the topic area, beginning by collecting, and writing down, a list of questions from the audience and visibly ticking them off as they are covered by your pre-prepared material.
  • If you have access to electronic keypads, running quick multiple-choice questions for the participants. They will be happier to volunteer opinions when anonymous, and it will give everyone an often surprising visual of the group’s opinions or knowledge. This can be especially useful in ethical topics, for instance.
  • If you are feeling confident or brave, trying lots of different activities – often learned by watching your peers present.

There are a few summary points on lectures in this post http://www.mededworld.org/reflections/reflection-items/September-2014/Lecture-in-Medical-Education.aspx

Feedback is important but not easy to interpret

Student feedback is important but often overrated. Constructive and thoughtful comments are much more useful (for implementing improvements) than rating scales. Constructing the most useful feedback questions is challenging. Do you concentrate on learning objectives (and what do you expect the audience to have achieved by the end of the lecture?), or do you ask them how useful it was? Did your talk have specific goals, such as raising curiosity? In rating the presenter (or even the presentation) the learner is often rating the presenter’s performance, and this is different to educational effectiveness. Not all presenters are natural performers, but don’t be too discouraged if this is you. Registrars sometimes comment that a presentation was hugely enjoyable but not all that relevant to their learning. Enthusiastic feedback is comforting, but the learning outcome is more important – and presentation skills can be learnt. Style is not necessarily substance! Have a look at this short but amusing video http://www.thenewsminute.com/article/ted-talks-spoof-how-be-thought-leader-and-get-standing-ovations-saying-nothing-all-44739

The evaluative process is important

Peer evaluation is useful, and it can be formal or informal. Taking the time to sit in on sessions run by other educators is a very valuable part of professional development. There is a lot of combined wisdom in a group of educators; after timely evaluation by those involved (and review of feedback), the important next step is to act on the observations and make it better next time.

Feedback – what works?


Mind the gap

A gap is frequently noted between what the teacher “gives” and what the learner appears to “receive” or act upon, and this is characterised as the effectiveness of the feedback. Ramaprasad’s original (1983) educational definition suggested that the process was not “feedback” unless translated into action – it was inherent in the definition. This view reflects the concept of a “loop”, present in the original uses of the term in other disciplines including economics, electronics and biological systems, to name but a few. It was appropriated as a metaphor into education and, like many metaphors, developed an existence of its own (and became mandatory). Feedback in a biological system describes an automatic, ongoing process, but in medical education it becomes prescriptive. Still, you can sometimes identify with a pituitary churning out chemicals toward an unresponsive thyroid or ovary. On the other hand, maybe some learners see some educational hypopituitarism out there!
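For those curious about the engineering origins of the metaphor, here is a minimal sketch (in Python, purely illustrative – the names and numbers are invented) of what a closed negative-feedback loop actually does. It shows why an unresponsive receiver – the thyroid above, or a learner who never acts on feedback – means the loop never closes, however much signal is sent:

```python
# A minimal, purely illustrative sketch of a closed negative-feedback loop,
# the sense in which electronics and biology originally used the term.
# All names and numbers here are invented for illustration only.

def feedback_loop(reference, actual, responsiveness, steps=5):
    """Repeatedly measure the gap and apply a correction.

    reference      -- the target level (the "required standard")
    actual         -- the current level (the "displayed performance")
    responsiveness -- fraction of each correction actually taken up;
                      0.0 models an unresponsive receiver
    """
    for step in range(steps):
        gap = reference - actual        # feedback is information about this gap
        actual += responsiveness * gap  # the loop only closes if the receiver acts
        print(f"step {step}: actual={actual:.1f}, gap was {gap:.1f}")
    return actual

# A responsive receiver converges toward the reference level...
feedback_loop(reference=100, actual=60, responsiveness=0.5)

# ...while an unresponsive one (the "unresponsive thyroid") never moves,
# however much signal the loop keeps sending.
feedback_loop(reference=100, actual=60, responsiveness=0.0)
```

In Ramaprasad’s terms, the printout of the gap alone is not feedback; it only becomes feedback when the responsiveness is non-zero and the gap is used to alter performance.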

Feedback is more often used to describe the content part of a one-way (and hierarchical) process of interaction, and there are certainly many organisations these days that claim to “welcome your feedback” yet have no intention of acting on it. Similarly, educational organisations compel their staff to collect student feedback, but how often does this affect the program? Job done, box ticked. Education itself sometimes morphs into mere information thrown at the learner – leaflets dropped from a drone, avoiding all human interaction.

Is there any evidence about what works to close the loop? Evidence is sometimes in short supply, and it is hard to distinguish evidence from opinion, whether expert or otherwise. However, if dropping feedback from the air resulted in the required improvement in performance, we would have to call it effective – so we shouldn’t necessarily just go with what feels good or seems like “common sense”.

What helps the learner to catch what is thrown and to run with it?

There are numerous reviews of the effects of feedback in medical education. There is very limited evidence so far regarding the effectiveness of Work-Based Assessments (WBAs), which are now widely mandated. It would be fair to summarise by saying that the unimpressive results, and the lack of agreement in the area, are likely due to the complexity of the educational context. One writer went so far as to ask whether, given this complexity, feedback can ever be effective.

A slightly different perspective from the evidence 

A few points that strike me, from the reviews mentioned below, include the importance of slightly less tangible aspects:

  • A culture of feedback
  • Relationship – its quality and continuity
  • A credible source for the feedback – this is worth reflecting on
  • The receiver of the feedback
  • Adaptability – sometimes the learner needs support, sometimes facilitation, sometimes correction. What works for teaching procedures may not work for communication skills and so forth.

Interesting Links 

A “respectful learning environment” is one of the “Twelve tips for giving feedback effectively in the clinical environment”, found here: http://sse.umontreal.ca/apprentissage/documentation/Ramani_KracKov_2012_Tips_Feedback.pdf. An article by Archer in Medical Education suggests that the practical models for delivering feedback are reductionist and that “only feedback seen along a learning continuum within a culture of feedback is likely to be effective” – it’s worth reading: http://www.pubfacts.com/detail/20078761/State-of-the-science-in-health-professional-education:-effective-feedback. The BEME review concluded that “the most effective feedback is provided by a credible, authoritative source over a number of years”, and this was reiterated in other reviews: http://www.bemecollaboration.org/Published+Reviews/BEME+Guide+No+7/

I find it interesting that so many articles begin with the premise that feedback somehow needs to happen despite ubiquitously less-than-ideal circumstances (the teacher, the busyness, the system etc). Hence “Teaching on the Run”, the “One Minute Preceptor” and the Mini-CEX. I will write a post later on the Clinical Teaching Visit, which is a more dedicated educational activity with a rich clinical resource and opportunity for feedback. It is also where feedback and assessment begin to intersect.

How feedback is done is probably more important than which tool is used. The systematic review by Saedon commented that “If WBAs are simply used as a box-ticking exercise, without sufficient emphasis on feedback, then any gains will be limited.” However, learners tended to report that written feedback was more helpful than numeric ratings. http://bmcmededuc.biomedcentral.com/articles/10.1186/1472-6920-12-25 Even Gen Y report preferring face-to-face interaction to the computer screen.

Where next?

Perhaps we should be spending more effort on ensuring that the context is conducive to good feedback, rather than just concentrating on the tools and strategies for the isolated feedback episode. This would mean (in our relatively short GP training) making the most of relationships with supervisors and medical educators. Perhaps the skills and preparation for receiving feedback are just as crucial as the mandated “Learning Plan”, particularly in the light of the lack of evidence for the validity of self-assessment. And, finally, perhaps we should have a go at evaluating some of what we do.

Resources for professional development – where to start looking

I was going to probe a bit further into “feedback” or jump into another topic but I decided, instead, to make some early suggestions about where to go for ideas on medical education. That way you can answer your own questions in your own time!

AMEE 

The Association for Medical Education in Europe (AMEE) is a great resource https://www.amee.org/home. It organises a conference in Europe every year, which generally attracts several thousand people – with a contingent from Australia. Like most medical education meetings there is a big emphasis on undergraduate education, but as there are lots of concurrent sessions to choose from it’s always possible to find something relevant. They also run MedEdWorld webinars on education topics.

The headquarters of AMEE are in Dundee, Scotland. If you become a member you also receive the journal Medical Teacher. AMEE publishes multiple medical education guides on specific topics https://www.amee.org/publications/amee-guides. There are over ninety of these, ranging from core topics to the slightly more esoteric, and they can be purchased individually (cheaper in the electronic version). AMEE also publishes BEME (Best Evidence in Medical Education) guides, which are reviews of the evidence in particular areas. I recall going to a session a few years ago on their BEME review of portfolios in postgraduate training. This influenced our program planning.

AMEE also runs a distance medical education course, Essential Skills in Medical Education (ESME). These modules can contribute towards a later Diploma or Masters. ME colleagues who completed this course found that doing it at the same time as a group of colleagues added value. Ronald Harden runs the course and has written a textbook called Essential Skills for a Medical Teacher.

The AMEE 2016 conference is in Barcelona and has a two-day pre-conference summit on Competency-Based Education which would be well worth attending. Generally, if you could get to one international conference, this would be it. Sadly it clashes, yet again, with the dates decided for GPTEC (our main local conference), which is on the Gold Coast this year http://gptec2016.com.au/. In Australia there is also the Australian and New Zealand Association for Health Professional Educators (ANZAHPE) http://www.anzahpe.org/, whose conference is in Adelaide in July 2017. This group has a broader remit and includes allied health.

Three more journals

The UK Association for the Study of Medical Education (ASME) publishes the journal Medical Education. I’ve read many useful articles in it, but they do tend towards the theoretical at times and are not specific to GP training. ASME also has another journal, The Clinical Teacher. The most readable occasional articles I have found have been in Education for Primary Care; however, the subscription cost is significant.

Texts on medical education

ASME published a book in 2010 called Understanding Medical Education. This fairly lengthy textbook is quite reasonably priced and is available in a Kindle version. At a similar price, for a more succinct BMJ publication, you could look at the ABC of Learning and Teaching in Medicine, although I think the book itself is only available in hard copy. It is presented similarly to their ABC books on clinical topics. The fourteen individual articles are, however, available online in the BMJ via PubMed https://annietv600.wordpress.com/2006/05/13/the-abc-of-learning-and-teaching-in-medicine-bmj-series-2003/

A specifically Australian book is the fairly concise Practice-Based Teaching by Richard Hays.

If you work for an educational organisation, it might invest in some of these modestly priced resources to support the professional development of education staff. If you have a university connection, the journals will be accessible in its libraries. My suggestion would be to use individual articles as the starting point for professional discussion with a group of interested colleagues (maybe electronically if you all work different days) and to dip into the evidence-based literature when education policy changes are mooted.

There are lots of other resources out there, especially when you start exploring online. There is increasing use of social media, particularly among those active in emergency medicine / critical care (see the SMACC conferences). https://foam4gp.com/ has some educational material and exam prep amongst the clinical content. Genevieve Yates (medical educator and GP) has a well-established blog with a variety of content, including educational posts https://genevieveyates.com. There are a limited number of podcasts on iTunes from Medical Education and The Clinical Teacher, and presumably there will be more of this sort of resource in the future. The Australian Medical Educator Network (AMEN) has just set up a blog and will soon be conducting some webinars.

Has anyone else found anything particularly useful?

Feedback 101 – “You’re going great!”

Is that sufficient, and does it matter? As educators we like to be positive. One of the most important things we do in education is to provide feedback; in fact, at the coalface of medical education, it is probably one of the first things we do. We all have our own ways of doing this – sometimes well and sometimes badly. When asked about their training, GP registrars often state that feedback is what they don’t get enough of, and that they want it to be more useful. They want to know how they are going.

There is a Train the Trainer manual which was produced by a number of Medical Educators in 2008. It is now in need of updating; however, it lists two ME competencies which are still relevant:

  • provide constructive feedback that is learner-centred and balanced
  • guide learners to self-reflect on their performance

Why feedback?

“Feedback”, as we understand it, is obviously something that was being done long before the term was invented. Presumably, mediaeval stonemasons gave feedback to their apprentices, who could see the finished cathedrals around them. The stonemasons demonstrated how stones should be cut and commented on the apprentices’ attempts in order to improve their performance. The medical education field has taken on the concept of “feedback” as addressing “the gap between the actual level of displayed learning and the reference level of learning” (Ramaprasad, 1983). Even these two concepts (performance and required standard) are tricky to measure. Nevertheless, a considerable amount of effort has been put into describing the process.

Most concise, practical accounts of feedback in medical education start (and sometimes end) with what is known as the feedback sandwich: say something positive, then something “constructive”/critical, then finish with something positive. Another early framework follows Pendleton’s Rules (1984): the learner is asked what was done well; the observer then notes what was done well; the learner states what could be improved; and the observer then notes what could be improved. The observer might also ask the learner how it could be improved, and some sort of plan is made to address this.

Many supervisors and educators do all of this intuitively and it is a pleasure to watch. Sometimes you can go by the rules until they gradually become second nature. The important points are to:

  • be supportive while providing feedback
  • keep the feedback constructive
  • keep an eye on the appropriate endpoint / standard

As educators we want to make every part of the education process as useful, relevant and effective as possible and feedback can help to do this.  Effective feedback, in general, should be:

  • Timely – so that it can be acted on whilst the learner still recalls what they did
  • Constructive, non-judgmental and focussed on observed behaviour
  • Individualised so that it is relevant to the particular learner
  • Credible – the learner must trust in the validity of the feedback
  • Specific and practical – enough information to suggest future learning but not so much that it risks overwhelming the learner (bite sized pieces)
  • Relevant to assessment outcomes

These are common “prescriptions” for feedback, and I suspect I have sometimes demonstrated the opposite characteristics! They have face validity, and perhaps some have evidence behind them – I might pursue some of these thoughts later. Sometimes they are just apparent “common sense” that someone had the initiative to put into bullet points on a PowerPoint at some point in MedEd history.

Some PURLS

The most concise summary of feedback is in Tip 10 of the Australian Teaching on the Run (TOTR) series. TOTR courses are often run in hospitals and address many aspects of clinical teaching. http://www.meddent.uwa.edu.au/teaching/on-the-run/tips/10

The following is a longer article (from the London Deanery, UK) with a few more suggestions on how to do it and the words to use. It also describes the barriers to giving effective feedback. http://www.faculty.londondeanery.ac.uk/other-resources/files/BJHM_%20Giving%20effective%20feedback.pdf

The Clinical Teaching Visit in Australian GP Training offers a great opportunity for quality feedback within the context of the whole consultation. This can address broader issues than just examination and clinical skills (maybe more about this later).

A common comment from registrars is that they wish supervisors would give more constructive feedback than just “you’re going really well” – even if it is well meant.