
Confound it! How meaningful are evaluations?

We generally state that we should evaluate our teaching. It’s a good thing.  But how meaningful are some of these exercises?

Just recently I saw a link on Twitter to this delightful article entitled "Availability of cookies during an academic course session affects evaluation of teaching" https://www.ncbi.nlm.nih.gov/pubmed/29956364  If this had been published in December in the BMJ I might have thought it was the tongue-in-cheek Christmas issue.  Basically it reported that the provision of chocolate cookies during a session in an emergency medicine course affected the evaluations of the course. Those provided with cookies rated the teachers more highly and even considered the course material to be better, and these differences were reported as statistically significant.  Apart from making you think it might be worth investing in some chocolate chip cookies, what other lessons do you draw from this fairly limited study?

There were a few issues raised by this for me. Firstly, it made me reflect on the meaningfulness of statistically significant differences in the real world.  Secondly, it made me quite concerned for staff in organisations such as universities where weight is sometimes placed on such scores when staff are being judged.  This is because, thirdly, it brought to mind the multiple possible confounders in any attempt at evaluation.

When planning your evaluation methods, think through several issues: if it is participant feedback, how timely is it? If you intend to measure something further down the track, such as knowledge acquisition, at what point is this best done?  Question the validity and reliability of the tools being used.  Consider using multiple methods, and consider how you might measure the "hard to measure" outcomes.  If programs (rather than just individual sessions) are being evaluated then ratings and judgments should be sourced more broadly.  I will not go into all the methods here (there is a good section in Understanding Medical Education: Evidence, Theory and Practice, ed. T Swanwick) but the article above certainly raised points for discussion.

Surprisingly, student ratings have dominated evaluation of teaching for decades. If evaluations are in an acceptable range, organisations may unwittingly rest on their laurels and assume there is no need for change.  Complacency can limit quality improvement when there is an acceptance that the required boxes have been ticked but the content of the boxes is not questioned more deeply.

More meaningful quality improvement may be achieved if evaluation methods are scrutinised more closely in terms of the relevant outcomes. Outcomes that are longer term than immediate participant satisfaction could be more pertinent, and some quality measures may be necessary over and above tick-box responses.  The bottom line: consider the possible confounders and question how meaningful your evaluation process is.

Written reports – a lost art?

Despite the date, this is not a Xmas newsletter (that particular seasonal art form), nor even a critique of Xmas newsletters – but season's greetings to all anyway. This post looks at written reports of a different kind, which often also summarise events in an overly positive way, with the more average truths slipping through the cracks unremarked.  Similarly, Xmas letters can become a bit impersonal and lose their impact.  I guess it all depends on the purpose of the exercise.

A lot gets written (and presented) about how to give effective (verbal) feedback, but much less space is given to considering how to put this feedback into an optimal written form that might serve a useful purpose. Only last week a teaching visitor suggested (after attending a workshop for Teaching Visitors) that she would appreciate some suggestions on writing a good report.

There's not much point writing a literary masterpiece if nobody reads it, and no point agonising over how to diplomatically state some unwelcome truths if it all disappears into the ether.  Such reports have several purposes and destinations.  They may be primarily intended for the learner, or for the body that oversees training (as part of an in-training assessment framework or a way of satisfying accreditors).  We write our Clinical Teaching Visit reports assuming that the learner reads them and perhaps their supervisor skims them.  The supervisor may note that we have observed the same things as they have during their observation sessions. We say such things as "would benefit from seeing more complex patients" and hope they do something about it.  In the past I have known conscientious registrars to re-visit their reports prior to exams – but the report needs, therefore, to be worth re-visiting and contain content that informs the reader.  A list of "strengths" and "weaknesses" may be vaguely helpful, but it would be interesting to know what both learners and supervisors do with Likert-type scores on multiple consultation skills.

It is possible, of course, that some organisations view the written report as legal evidence to justify later decision-making regarding competency, rather than as a useful formative process. This 2011 report https://www.mja.com.au/journal/2011/195/7/review-prevocational-medical-trainee-assessment-new-south-wales  noted that supervisors tended to routinely grade residents as at or above the expected level and that "As currently used by trainees and supervisors, the assessment forms may underreport trainee underperformance, do not discriminate strongly between different levels of performance of trainees … and do not provide trainees with enough specific feedback to guide their professional development." It questioned: "What does this actually say about their developing competency? If a trainee does a core medical, surgical or emergency term in Term 1, performing 'at expected level' indicates a lower level of performance than if the term was completed in Term 5. The phrases 'at expected level' or 'above expected level' do not indicate a specific level of competence."

Although, as noted, there is little in the literature (compared with giving verbal feedback), there is an article on "Twelve tips for completing quality in-training evaluation reports" https://www.ncbi.nlm.nih.gov/pubmed/24986650 – though this is directed more at end-of-term evaluations done by the ongoing supervisor than at a one-off report done by a CT visitor.  The article notes that the more recent literature emphasises the importance of qualitative assessments (as opposed to concern about the reliability of the assigned ratings) and that the focus now is on improving the quality of the written, narrative comments.  What you say is important. The article suggests completing the comments section before the ratings section in order to avoid the tendency to rate all components the same (eg all 4/5).  It is important that the feedback form you use enables you to provide such feedback and also has meaningful anchors on the rating scale.

A study looking at the quality of written feedback noted consistent differences between trainee-trainer pairs in the nature of comments which suggested that feedback quality was determined not so much by the instrument as by the users. http://bmcmededuc.biomedcentral.com/articles/10.1186/1472-6920-12-97

So how can our written feedback on observed consultations be most meaningful? By this I mean that the learner hears what is said, can relate it to what they did, and can recall it in order to improve future practice.  The supervisor or educator should receive a document that enlightens them about the learner's performance and progress in a way that informs future supervision / teaching / clinical experiences / remediation.  In some instances it will contribute concisely to an overall program of evaluation.  Here's how this might be achieved.

  1. All the usual principles of effective feedback apply (non-judgmental, based on observed behaviours etc) – see earlier posts.
  2. It is particularly important that it should be timely – if they don’t get the report for 6 weeks they will have forgotten the consultations. I know I have. The more complicated the process the more points there are at which this can fail. Unfortunately my Xmas newsletters are running late and are not very timely this year.  Fortunately, in this context, this probably has no significant impact.
  3. The written report should reinforce what was said in person. No surprises.
  4. The written report speaks to the individual – it is not generic. It asks to be taken personally.  (I am much more likely to read a Xmas newsletter that seems to know who I am).  Filling in forms can be seen as “just a formality” but there is considerable engagement between learner and visitor during the GP Clinical Teaching Visit plus a relative lack of constraint because they aren’t the supervisor (with the attendant conflict of roles). However the conflict between feedback and assessment remains.
  5. Good feedback tries to be specific and behavioural, such as "I like the way you listen to the patients at the start of every consultation – keep doing this", "Remember to ensure that you advise the patient on what to expect and safety-net before the consult finishes" or "As we discussed, I noticed the patient had trouble understanding some of the technical terms you used, eg hyponatraemia, globus and crepitations. I suggest practising the use of some lay terms when appropriate or providing an explanation. Perhaps you could discuss this specifically with your supervisor in your next teaching session".
  6. However, feedback can also include some global encouraging comments such as “Dr Smith demonstrated many attributes of a good family doctor today” or “Dr Singh has a great manner with older patients.” (hopefully the specifics were discussed at the time).
  7. There is no need to cover everything in the narrative feedback – in fact a small number of concise points works best so stick to the most important. This will add weight to them and make them more memorable. (Just like long newsletters are unlikely to be read to the end). The message might be “If you work on anything in the next couple of months, it should be this….”
  8. It has been noted that effective feedback requires accurate self-assessment, reflection and insight on the part of the learner so it is a plus if the report can encourage this.
  9. The written report should suggest ways of improving, developing and exploring – with references and links that can be utilised at a future point in time. For instance, you might comment on their previous lack of experience and current lack of confidence in women’s health and suggest that the FPA course might be useful (and provide a link) or sitting in with Dr X in the practice who does a lot of women’s health.
  10. If ratings have been made at “below the expected level” then it would be useful to make specific comments about these areas and the expected improvements to be made. This requires being aware of the gap between performance and the appropriate standard which is the essence of feedback.  (Perhaps this point can be left out of our Xmas newsletters!)

At the end of the process you will have conscientiously filled in the assessment form AND provided a couple of "take-home messages" that will be worth acting on and revisiting – for all concerned.

Diagnosing (and responding to) the struggling learner

I am posting this on the heels of the last post because that one was rather grand and strategic in its approach and I felt it needed to be followed by something more practical that related to day-to-day supervision of trainees. I almost called it “remedial diagnosis” but there is a continuum – from feedback aimed at progressive improvement to focussed interventions to help a learner get up to speed in a particular area and onward to identified areas of major deficit requiring official “remediation”.  We all need to remediate bits of our practice (depending on your definition).

Three weeks ago I went to the dentist and had root canal therapy. This felt like "deep excavation", as on the (unreadable below!) danger sign by the major beach renovation that I noted on my morning walk today.  These problems are often revealed after large storms, just as a learner's problems are often revealed after exams.  Of course some required renovations may be largely cosmetic.

[Image: "deep excavation" danger sign at the beach renovation]

If some in-training formative assessment has suggested a risk for problems with the final exams (or indeed for performance as a GP at the end of training) the specific needs can only be targeted if the specific problems are identified. This will then suggest a more focussed approach for both learners and supervisors / educators.

What do we know about remediation?

Remediation implies intervention in response to performance against a standard. A recent review concluded (pessimistically, as systematic reviews often do) that most studies on remediation are of undergraduate students, focussed on the next exam, with rare long-term follow-up and improvements that were not sustained. Active components of the process could not be identified (Cleland J et al 2013, The remediation challenge: theoretical and methodological insights from a systematic review, Medical Education 47(3), pp 242-51).  A paper appealingly entitled "Twelve tips for developing and maintaining a remediation program in medical education" (Kalet A et al 2016, Med Teacher 38(8), pp 787-792) has a few interesting observations but is directed at institutions. It noted the common observation that educators spend 80% of their time with 20% of trainees, that many trainees will struggle at some point and may need more or fewer resources, and yet there is limited recognition of this, or investment in resources, at any level.  The relevant chapter in "Understanding Medical Education" (Swanwick T) notes that performance is a function of ability plus other important factors.  The quality of the learning and working environment is also important – sometimes the fault may lie more with us.  It observes that successful models of remediation aren't well established and, as with the Kalet article, it advises personalised support rather than a "standard prescription".

So, we are left a bit to our own devices in diagnosing and managing. Nevertheless, I think we are fortunate in GP training in that, up to now, we have historically had a personalised training culture that emphasises, accepts and indeed wants feedback.  Problems cluster into several areas.

Four common problems and ways to address them

  • Communication skills can sometimes be the most obvious limiting factor in performance. These can be subdivided into language skills (a large and well-addressed topic on its own) or more subtle skills within the consultation – use of words or phrases, jargon, clarity or conciseness, tone of voice, body language etc. These are often picked up on observation (or less often, but notably, from patient feedback). The most useful way to draw these to the attention of the learner, and to begin addressing the issues, is to use video debrief.
  • The easiest diagnosis is lack of knowledge. This might be revealed in a workshop quiz or a teaching visit or in a supervisor’s review of cases. Sometimes GP registrars (particularly if they have done previous training in a sub-specialty) underestimate the breadth of knowledge required for general practice. Sometimes this awareness does not dawn until the exam is failed and they admit “I didn’t take it seriously”. In GP training, considerable knowledge is required for the AKT and it underpins both the AKT and the OSCE. Sometimes the issue is the type of knowledge required. They may have studied Harrison’s (say) and be able to argue the toss about various auto-immune diseases or the validity of vitamin D testing and yet have insufficient knowledge of the up-to-date, evidence-based guidelines for common chronic diseases. They may have very specific gaps, such as women’s health or musculoskeletal medicine, because of personal interests or their practice case-load. In real life the GP needs to know where to go to fill in the gaps that are revealed on a daily basis but, for the exam, the registrar needs to have explored and filled in these gaps more thoroughly. The supervisor can stretch their knowledge in case discussions, monitor their case-load, direct them to relevant resources and touch base re study. Registrars can present to practice meetings (teaching enhances learning). Prior to exams it is useful to practice knowledge tests and follow up on feedback from wrong answers.
  • Consultation skills deficiencies are often about structure. They may be picked up because of difficulty with time management but, equally, there may be problems within the consultation. The registrar may not elicit the presenting problem adequately, convey a diagnosis, negotiate appropriately with the patient regarding management, utilise relevant community resources, introduce opportunistic prevention or implement adequate safety netting. All these skills, and others, are necessary in managing patients safely and competently in general practice. There are many “models” of the GP consultation which can be helpful to learners if discussed explicitly. It can also be useful to have registrars sitting in for short periods with different GPs in the practice in order to observe different consulting methods. However, this is less useful if it is just a passive process and the registrar does not get the chance to discuss and reflect on different approaches. The most useful coaching is direct feedback as a result of observation by supervisors and teaching visitors. This may require extra funding.
  • Inadequate clinical reasoning is the more challenging diagnosis. Good clinical reasoning is something you hope a registrar would have acquired through medical school and hospital experience, but this is not always the case. Even if knowledge content and procedural skills are adequate, poor clinical reasoning is an unsafe structure on which to build. This issue may come to light through failure in the KFP or through observation by the supervisor.  It may be necessary to go back to basics. A useful method is to utilise and explore Random Case Analysis (RCA) in teaching sessions.  A helpful article is http://www.racgp.org.au/afp/2013/januaryfebruary/random-case-analysis/ – particularly the use of "why?" and "what if?" questions when interrogating. Sometimes clinical reasoning needs to be tweaked to be appropriate for general practice. A re-read of Murtagh on this topic is always useful, and practice KFPs can reveal poor clinical reasoning.  Registrars can sometimes be observed to apparently leap towards the correct diagnosis, or arrive circuitously at the correct and safe conclusion, without the clinical reasoning being obvious.  In these circumstances it is useful to question the registrar about each stage of their thinking and decision making in order to practise articulating their clinical reasoning.

In summary

Remediation "diagnoses" can be made in the areas of communication, consultation skills, knowledge and clinical reasoning (and, no doubt, others). The "symptoms" often come to light during observation, workshop quizzes, in-training assessments, case discussions and practice or patient feedback.  Management strategies include direction to appropriate resources, direct observation, video debriefing, case discussion, practice exam questions (with feedback and action) and random case analysis. Most organisations have templates for relevant "plans" which are useful to keep all parties on track.

Funders and standard-setters are more likely to have "policies" on remediation rather than any helpful resources on how to do it. There is not much in the literature and it is often difficult to develop expertise on a case-by-case basis (with much individual variation).  Prior to the reorganisation of GP training in Australia, some Training Providers had developed educational remediation expertise which could be utilised in more extreme cases by other providers. As educators we need to develop our own skills, document our findings and share freely with others. Supervisors need to know what to do if they suspect a performance issue, ie communication channels with the training organisation should be open.

Evaluation – How do we know we are doing a good job?

There are multiple approaches to evaluation, and many are related to predicting outcomes in training or to issues of Quality Improvement.  As professionals, medical educators aim to do their job well and benefit from evaluating what they do.  At a higher level, Program Evaluation is an important issue.  At all levels, evaluation helps you decide where to focus energy and resources and when to change or develop new approaches. It also prevents you from becoming stale. However, it needs curiosity, access to the data, expertise in interpreting it and a commitment to acting on it, and there needs to be organisational support.

Doing it better next time

So, at the micro level, I get asked to give a lecture on a particular topic, to run a small group or to produce some practice quiz questions for exam preparation.  How do I know if I do it well or even adequately?  How can I know how to do it better next time?

There are many models of evaluation, particularly at the higher levels of program evaluation (if you are keen you could look at AMEE guides 27 and 29, or this http://europepmc.org/articles/PMC3184904 or https://www.researchgate.net/publication/49798288_BEME_Guide_No_1_Best_Evidence_Medical_Education ).  They include the straightforward Kirkpatrick hierarchy (a good example of how a 1950s PhD thesis in industry went a long way), which places learner satisfaction at the bottom, followed by increased knowledge, then behaviour in the workplace and, finally, impact on society – or health of the population in our context.  There are very few studies able to look at the final level, as you can imagine.

Some methods of evaluation

The simplest evaluation is a tick-box Likert scale of learner satisfaction.  Even this has variable usefulness depending on the way questions are structured, the response rate of the survey and the timeliness of the feedback.  The conclusions drawn from a survey sent out two weeks after the event with a response rate of 20% are unlikely to be very valid.  Another issue with learner satisfaction is the difference between measuring the presenter's performance and measuring the educational utility of the session.  I well recall a workshop speaker who got very high ratings as a "brilliant speaker", yet none of the learners could list anything they had learnt that was relevant to their practice.  You could try to relate the questions to the required "learning objectives", but these can sometimes sound rather formulaic or generic.  It is certainly best if the objectives are the same as those intended by the presenter, and they should be geared towards what you actually intended to happen as a result of the session. When evaluating you need to be clear about your question: what do you want to know?
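For those who like to see the arithmetic, here is a minimal, purely illustrative sketch of summarising a tick-box satisfaction survey while keeping an eye on the response rate. The numbers, names and the 50% threshold are my own hypothetical choices, not from any real survey.

```python
# Purely illustrative: hypothetical Likert ratings (1-5) from a workshop survey.
invited = 40                            # number of participants sent the survey
ratings = [5, 4, 4, 3, 5, 4, 2, 5]      # made-up responses from those who replied

response_rate = len(ratings) / invited
mean_rating = sum(ratings) / len(ratings)

print(f"Response rate: {response_rate:.0%}")
print(f"Mean satisfaction: {mean_rating:.1f} / 5")

# A low response rate undermines any conclusion drawn from the mean.
if response_rate < 0.5:                 # arbitrary, illustrative threshold
    print("Caution: low response rate - treat the mean as indicative at best.")
```

A glowing mean from eight respondents out of forty tells you very little, which is exactly the point about the 20% survey above.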

If you add free comments to the ratings with a request for constructive suggestions, you are likely to get a higher quality response and one that may influence future sessions.  It is also possible to ask reflective questions at the end of a semester about what learners recall as the main learning points of a session.  After all, we really want education that sticks!

Another crucial form of evaluation is review with your peers. Ask a colleague to sit in if this does not happen routinely in your context.  Feedback from informed colleagues is very helpful because we can all improve how we do things.  It is hard to be self-critical when you have poured a large amount of effort into preparing a session, and outside eyes may see things we cannot.

To progress up the hierarchy you could administer a relevant knowledge test at a point down the track or ask supervisors a couple of pertinent questions about the relevant area of practice.

Trying out something new

If you want to try an innovative education method or implement something you heard at a conference it is good practice to build in some evaluation so that you can have a hint as to whether the change was worth making.

An example

A couple of years ago I decided to change my Dermatology and Aged Care sessions into what is called a flipped classroom, so I put my PowerPoint presentations and a pre-workshop quiz online as pre-viewing for registrars.  I then wrote several detailed discussion cases, with facilitator notes, for discussion in small groups.  I did something similar with a Multimorbidity session, where I turned a presentation into several short videos with voice-over and wrote several cases to be worked through at the workshop.

I wanted to compare these with the established method, so I compared the ratings with those of the previous year's lecture session (the learning objectives were very similar).  Bear in mind there is always the problem of these being different cohorts.  I also asked specific questions about the usefulness of the quiz and the small group sessions, and checked how many registrars had accessed the online resources prior to the session.  It was interesting to me that the quiz and the small groups were rated as very useful and that the new session had slightly higher ratings for achievement of the learning objectives.  Prior access to the online material made little difference to the ratings.  I also assessed confidence levels at different points in subsequent terms. In an earlier trial of a new teaching method I also assessed knowledge levels.
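For anyone curious about how such a rating comparison might be handled numerically, here is a minimal sketch. The ratings are invented, and the choice of a rank-based Mann-Whitney U test is my assumption for illustration, not part of the evaluation described above.

```python
# Purely illustrative: compare Likert ratings (1-5) from two hypothetical cohorts.
from scipy.stats import mannwhitneyu

lecture_ratings = [3, 4, 3, 4, 2, 4, 3, 3, 4, 3]   # made-up ratings, previous year's lecture
flipped_ratings = [4, 4, 5, 3, 4, 5, 4, 4, 3, 5]   # made-up ratings, flipped-classroom session

# Ordinal ratings from independent cohorts: a rank-based test is a reasonable, if rough, choice.
stat, p_value = mannwhitneyu(flipped_ratings, lecture_ratings, alternative="two-sided")

print(f"U = {stat:.1f}, p = {p_value:.3f}")
# Even a "significant" p-value says nothing about the different-cohort confound noted above,
# nor about whether any difference is educationally meaningful.
```

In practice the free comments, the usage data and the later confidence checks probably tell you more than the p-value does.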

Education research is often “action research”.  There is much you can’t control and you just do the best you can. However, if you read up on the theory, discuss it with colleagues and see changes made in practice then it all contributes to your professional development.  Sharing it with colleagues at a workshop adds further value.

Some warnings

Sometimes evaluations are done just because they are required to tick a box and sometimes we measure only what is easy to measure.  Feedback needs to be collected and reviewed in a timely fashion so that relevant changes can be made and it is not just a paper exercise. There is no point having the best evaluation process if future sessions are planned and prepared without reference to the feedback.  It would be good if we applied some systematic evaluation to new online learning methodologies and didn’t just assume they must be better!

Evaluation is integral to the Medical Educator role

A readable article on the multiple roles of The Good Teacher is found in AMEE guide number 20 at http://njms.rutgers.edu/education/office_education/community_preceptorship/documents/TheGoodTeacher.pdf

Evaluation is a crucial part of the educator role; when the two (education and evaluation) are separated, the educator's role is diminished and the usefulness of any evaluation is curtailed.  Many things influence training outcomes, including selection into training, the content and assessment of training, and the processes and rules around training. As an educator you may have increasingly less influence over decisions about selection processes and even over the content of the syllabus.  However, you may still have some say in what happens during training.  I would suggest that the less influence educators have in any of these decisions, the less engaged they are likely to be.

At the level of program evaluation by funders, these tasks are more likely to be outsourced to external consultants, with a consequent limitation in the nature of the questions asked, a restriction in the data utilised and conclusions which are less useful.  "Statistically significant" results may be educationally irrelevant in your particular context.  Our challenge is to evaluate in a way which is both useful and valid and which helps to advance our understanding as a community of educators.  A well thought out study is worth presenting or publishing.

 

Observation – the Teaching Visit

For both assessment and effective feedback you can't beat direct observation (I'll look at associated assessment issues later).  Those who train in hospitals are not short of opportunities for observation with ward rounds, teams and open-plan spaces.  However, tools still had to be introduced in an effort to facilitate feedback in the busy clinical environment.  GP training presents a different challenge.  In the worst scenario the registrar lands in a practice and begins to practise, unobserved, behind the closed door of the consulting room.  This is probably a hangover from the days when doctors were assumed to be ready to enter general practice straight from hospital experience.  On the other hand, GP training currently has a significant advantage in its built-in requirement for Clinical Teaching Visits (CTVs):  several hours dedicated to direct observation of clinical activity within the practice setting.  The practical process for doing this is outlined by the training organisations.  I will cover the value of the supervisor "sitting in" in a later post.

The luxury of the CTV

The Clinical Teaching Visit does not merely provide a snapshot of diagnostic acumen in a particular presentation, an isolated management skill or some procedural competency. It provides the opportunity to observe and provide immediate feedback on the full breadth of GP skills: communication, diagnosis, prevention, management strategies, follow-up, test-ordering, prescribing, safety-netting, awareness of community resources and functioning within a team.  It is also from an "external" perspective.  It provides the opportunity to act on (and practise) the feedback in the subsequent consultations (eg "how about you try to listen for a minute at the start of the next consultation").   If used to its potential the CTV also enables evaluation of the practice and supervisory environment, which are crucial to quality education and training.

You get to see the bigger picture.

A closer look

There are several useful tools to focus the mind when registrars are being observed by either supervisor or teaching visitor. These tend to categorise and provide a rating scale for some of the micro-skills of consulting.  There are educational reasons for doing CTVs which address broader issues.  It can be an opportunity to develop the productive mentoring relationship between Educator and Registrar.  It is an opportunity for supporting, guiding, challenging and encouraging (not just assessing).  If we fail to do these things we are probably providing a sub-optimal teaching visit.  It is an individualised experience where the skilled visitor is responsive to the trainee’s context, competence and potential.

Communication skills

Feedback on communication skills includes the usual advice about listening, how to ask questions and avoiding jargon in explanations but the observer can also often identify non-verbal communication, tone of voice or habitual use of words and feed back in a timely fashion while it is fresh in the minds of both.  In the CTV, the observer can note, more holistically, whether the doctor is able to adapt their approach to different patients.

Clinical knowledge

The breadth of knowledge addressed is often limited by the patients seen, although the range, over four to eight patients in a session, can be significant. However, identifying, within the specific context, a few discrete areas for further study is powerful and can include suggestions for self-directed learning.

Practical consultation skills

These are often the hidden gems in a teaching visit, where an experienced GP is able to proffer practical suggestions: "Here's a tip for writing your computer notes…", "You could try this to manage several presenting problems…", "Here's a way to do a concise neuro exam…" These are often remembered long after the visit – and even after being Fellowed.

Encouragement

Don't underestimate the value of positive collegial statements such as "I like the way you did…", "I learned something from you today when you said…", "You are a great family doctor".

The written report

The written report is able to summarise the patterns revealed in successive consultations and log the registrar's progress in competence over several visits. There is not much evidence on the effectiveness of written (as opposed to verbal) feedback.  Qualitative free text is often valued more highly than ratings.  Teaching Visitors may write reports differently (depending on how constrained they are by the template).  Having read hundreds of reports, I can see different benefits in different approaches.  Some couch feedback around management of the specific patient case whereas others summarise general competencies in the conclusion.  Some give snippets of practical advice and others provide registrars with numerous up-to-date links to relevant resources on the internet.  In the best-case scenario, these differences would be modified to suit the learner being observed.  It adds value if the recorded document is one to which the registrar might usefully refer when relevant. It is probably important to go back to basics and ask yourself why and for whom the report is being written: for the supervisor, for the learner, for the training organisation, for assessment, for legal reasons etc.  Purpose should inform method (and direct any evaluation).

What works in CTVs?

Registrars often comment, at the end of training, that the CTVs were the most useful part of training. We can speculate about what works in teaching visits, but it has also been looked at.  If you google "How useful are clinical teaching visits to registrars and supervisors" you will find a presentation from a previous General Practice Education and Training (GPET) conference.  This study surveyed and interviewed supervisors and registrars (in all terms) over a six-month period.  95% of registrars reported they were encouraged to identify areas of clinical knowledge and consultation skills for improvement.  Only 13% of registrars found the feedback surprising or challenging. 94% of supervisors felt they were provided with useful information about their registrar's skills, and the majority reported intentions to modify areas of teaching and supervision. The study also looked at practical issues of the organisation of visits and came up with suggestions (from registrars and supervisors) for overall improvements, which included:  some variability of visitors, having supervisors also sit in, more communication with the supervisor during the visit, more time for discussion, and attention to timing (not late in the term).  It was noted that the report should only contain feedback discussed at the visit and could link clinical feedback to articles for further study.

Feedback – what works?


Mind the gap

A gap is frequently noted between what the teacher "gives" and what the learner appears to "receive" or act upon, and this gap is characterised as the effectiveness of the feedback. Ramaprasad's original (1983) educational definition of feedback suggested that the process was not "feedback" unless translated into action; it was inherent in the definition. This view reflects the concept of a "loop", which is present in some of the original uses of the term in other disciplines, including economics, electronics and biological systems, to name but a few.  It was appropriated as a metaphor into education and, like many metaphors, developed an existence of its own (and became mandatory).  Feedback in a biological system describes an automatic, ongoing process, but in medical education it becomes prescriptive.  However, you can sometimes identify with a pituitary churning out chemicals towards an unresponsive thyroid or ovary.  On the other hand, maybe some learners see some educational hypopituitarism out there!

Feedback is more often used to describe the content part of a one-way (and hierarchical) process of interaction, and there are certainly many organisations these days who claim to "welcome your feedback" yet have no intention of acting on it.  Similarly, educational organisations compel their staff to collect student feedback, but how often does this affect the program?  Job done, box ticked.  Education itself sometimes morphs into information thrown at the learner, or perhaps leaflets dropped from a drone, thereby avoiding all human interaction.

Is there any evidence about what works to close the loop? Sometimes this is in short supply and it is hard to distinguish evidence from opinion, whether expert or otherwise.  However, I guess if dropping feedback from the air resulted in the required improvement in performance then we would say it was effective, so we shouldn't necessarily just go with what feels good or what seems like "common sense".

What helps the learner to catch what is thrown and to run with it?

There are numerous reviews of the effects of feedback in medical education. There is very limited evidence so far regarding the effectiveness of Work Based Assessments (WBAs), which are now widely mandated.  It would be fair to summarise by saying that the unimpressive results and lack of agreement in the area are likely due to the fact that the educational context is so complex. One writer went so far as to ask whether, given the complexity, feedback can ever be effective.

A slightly different perspective from the evidence 

A few points that strike me, from the reviews mentioned below, include the importance of slightly less tangible aspects:

  • A culture of feedback
  • Relationship – its quality and continuity
  • The credible source for the feedback – it is worth reflecting on this
  • The receiver of the feedback
  • Adaptability – sometimes the learner needs support, sometimes facilitation, sometimes correction. What works for teaching procedures may not work for communication skills and so forth.

Interesting Links 

A "respectful learning environment" is one of the "Twelve tips for giving feedback effectively in the clinical environment" found here: http://sse.umontreal.ca/apprentissage/documentation/Ramani_KracKov_2012_Tips_Feedback.pdf. An article by Archer in Medical Education suggests that the practical models for delivering feedback are reductionist and that "only feedback seen along a learning continuum within a culture of feedback is likely to be effective." It's worth reading: http://www.pubfacts.com/detail/20078761/State-of-the-science-in-health-professional-education:-effective-feedback.  The BEME review concluded that "the most effective feedback is provided by a credible, authoritative source over a number of years". This was reiterated in other reviews: http://www.bemecollaboration.org/Published+Reviews/BEME+Guide+No+7/

I find it interesting that so many articles begin with the premise that feedback somehow needs to happen despite ubiquitously less-than-ideal circumstances (the teacher, the busyness, the system etc). Hence "Teaching on the Run", the "One Minute Preceptor" and the mini-CEX.  I will write a later post on the Clinical Teaching Visit, which is a more dedicated educational activity with its rich clinical resource and opportunity for feedback.  It is also where feedback and assessment begin to intersect.

How feedback is done is probably more important than which tool is used. The systematic review by Saedon commented that "If WBAs are simply used as a box-ticking exercise, without sufficient emphasis on feedback, then any gains will be limited." However, learners tended to report that written feedback was more helpful than numerical ratings. http://bmcmededuc.biomedcentral.com/articles/10.1186/1472-6920-12-25 Even Gen Y report preferring face-to-face interaction to the computer screen.

Where next?

Perhaps we should be spending more effort on ensuring that the context is conducive to good feedback rather than just concentrating on the tools and strategies for the isolated feedback episode. This would mean (in our relatively short GP Training) making the most of relationships with supervisors and medical educators.  Perhaps the skills and preparation for receiving feedback are just as crucial as the mandated "Learning Plan", particularly in the light of the lack of evidence for the validity of self-assessment.  And, finally, perhaps we should have a go at evaluating some of what we do.

Feedback 101 – “You’re going great!”

Is that sufficient, and does it matter? As educators we like to be positive.  One of the most important things we do in education is to provide feedback; in fact, at the coal face of medical education, it is probably one of the first things we do.  We all have our own ways of doing this – sometimes well and sometimes badly.  When asked about their training, GP registrars often state that feedback is what they don't get enough of and that they want it to be more useful.  They want to know how they are going.

There is a Train the Trainer manual which was produced by a number of Medical Educators in 2008. It is now in need of updating.  However, it lists two ME competencies which are still relevant:

  • provide constructive feedback that is learner-centred and balanced
  • guide learners to self-reflect on their performance

Why feedback?

"Feedback", as we understand it, is obviously something that was being done long before the term was invented. Presumably, mediaeval stonemasons gave feedback to their apprentices, who were aware of what a finished cathedral looked like.  The stonemasons demonstrated how stones should be cut and commented on the apprentices' attempts in order to improve their performance.  The medical education field has taken on the concept of "feedback" as addressing "the gap between the actual level of displayed learning and the reference level of learning" (Ramaprasad, 1983).  Even these two concepts (performance and required standard) are tricky to measure.  However, a considerable amount of effort has been put into describing the process.

Most concise, practical accounts of feedback in medical education start (and sometimes end) with what is known as the feedback sandwich: say something positive, then something “constructive”/critical, then finish with something positive. Another early framework follows Pendleton’s Rules (1984): the learner is asked what was done well, the observer then notes what was done well, the learner states what could be improved and the observer then notes what could be improved. The observer might also ask the learner how it could be improved and some sort of plan is made to address this.

Many supervisors and educators do all of this intuitively and it is a pleasure to watch. Sometimes you can go by the rules and they gradually become second nature.  The important points are:

  • be supportive while
  • providing constructive feedback and
  • keeping an eye on the appropriate endpoint / standard

As educators we want to make every part of the education process as useful, relevant and effective as possible and feedback can help to do this.  Effective feedback, in general, should be:

  • Timely – so that it can be acted on whilst the learner still recalls what they did
  • Constructive, non-judgmental and focussed on observed behaviour
  • Individualised so that it is relevant to the individual
  • Credible – the learner must trust in the validity of the feedback
  • Specific and practical – enough information to suggest future learning but not so much that it risks overwhelming the learner (bite sized pieces)
  • Relevant to assessment outcomes

These are common "prescriptions" for feedback, and I suspect I have sometimes demonstrated the opposite characteristics!  They have face validity and perhaps some have evidence behind them – I might pursue some of these thoughts later.  Sometimes they are just apparent "common sense" that someone had the initiative to put into bullet points on a PowerPoint slide at some point in MedEd history.

Some PURLS

The most concise summary of feedback is in tip 10 of the Australian Teaching on the Run (TOTR) series. TOTR courses are often run in hospitals and address many aspects of clinical teaching. http://www.meddent.uwa.edu.au/teaching/on-the-run/tips/10

The following is a longer article (from the London Deanery, UK) with a few more suggestions on how to do it and the words to use. It also describes the barriers to giving effective feedback. http://www.faculty.londondeanery.ac.uk/other-resources/files/BJHM_%20Giving%20effective%20feedback.pdf

The Clinical Teaching Visit in Australian GP Training offers a great opportunity for quality feedback within the context of the whole consultation. This can address broader issues than just examination and clinical skills (maybe more about this later).

A common comment from registrars is that they wish supervisors would give more constructive feedback than just "you're going really well" – even if it is well meant.