Monthly Archives: September 2016

Diagnosing (and responding to) the struggling learner

I am posting this on the heels of the last post because that one was rather grand and strategic in its approach, and I felt it needed to be followed by something more practical relating to day-to-day supervision of trainees. I almost called it “remedial diagnosis”, but there is a continuum – from feedback aimed at progressive improvement, to focussed interventions that help a learner get up to speed in a particular area, and onward to identified areas of major deficit requiring official “remediation”.  We all need to remediate bits of our practice (depending on your definition).

Three weeks ago I went to the dentist and had root canal therapy. This felt like “deep excavation”, as in the (unreadable below!) danger sign by the major beach renovation that I noted on my morning walk today.  These problems are often revealed after large storms, just as a learner’s problems are often revealed after exams.  Of course, some required renovations may be largely cosmetic.

[Image: danger sign at the beach renovation]

If some in-training formative assessment has suggested a risk for problems with the final exams (or indeed for performance as a GP at the end of training) the specific needs can only be targeted if the specific problems are identified. This will then suggest a more focussed approach for both learners and supervisors / educators.

What do we know about remediation?

Remediation implies intervention in response to performance against a standard. A recent review concluded (pessimistically, as systematic reviews often do) that most studies on remediation are of undergraduate students, focussed on the next exam, with rare long-term follow-up and improvements that were not sustained. Active components of the process could not be identified (Cleland J et al. 2013. The remediation challenge: theoretical and methodological insights from a systematic review. Medical Education 47(3):242-51).  A paper appealingly entitled “Twelve tips for developing and maintaining a remediation program in medical education” (Kalet A et al. 2016. Med Teacher 38(8):787-92) has a few interesting observations but is directed at institutions. It notes the common observation that educators spend 80% of their time with 20% of trainees, that many trainees will struggle at some point and may need more or fewer resources, and yet there is limited recognition of this, or investment in resources, at any level.  The relevant chapter in “Understanding Medical Education” (Swanwick T) notes that performance is a function of ability plus other important factors.  The quality of the learning and working environment is also important – sometimes the fault may lie more with us.  It observes that successful models of remediation aren’t well established and, as with the Kalet article, it advises personalised support rather than a “standard prescription”.

So, we are left somewhat to our own devices in diagnosing and managing. Nevertheless, I think we have been fortunate in GP training in that, up to now, we have had a personalised training culture that emphasises, accepts (and, indeed, wants) feedback.  Problems cluster into several areas.

Four common problems and ways to address them

  • Communication skills can sometimes be the most obvious limiting factor in performance. These can be divided into language skills (a large and well-addressed topic in its own right) and more subtle skills within the consultation – use of words or phrases, jargon, clarity or conciseness, tone of voice, body language etc. These are often picked up on observation (or less often, but notably, from patient feedback). The most useful way to draw these to the attention of the learner, and to begin addressing the issues, is to use video debrief.
  • The easiest diagnosis is lack of knowledge. This might be revealed in a workshop quiz, a teaching visit or a supervisor’s review of cases. Sometimes GP registrars (particularly if they have done previous training in a sub-specialty) underestimate the breadth of knowledge required for general practice. Sometimes this awareness does not dawn until the exam is failed and they admit “I didn’t take it seriously”. In GP training, considerable knowledge is required for the AKT and it underpins both the AKT and the OSCE. Sometimes the issue is the type of knowledge required. They may have studied Harrison’s (say) and be able to argue the toss about various auto-immune diseases or the validity of vitamin D testing, and yet have insufficient knowledge of the up-to-date, evidence-based guidelines for common chronic diseases. They may have very specific gaps, such as women’s health or musculoskeletal medicine, because of personal interests or their practice case-load. In real life the GP needs to know where to go to fill in the gaps that are revealed on a daily basis but, for the exam, the registrar needs to have explored and filled in these gaps more thoroughly. The supervisor can stretch their knowledge in case discussions, monitor their case-load, direct them to relevant resources and touch base regarding study. Registrars can present to practice meetings (teaching enhances learning). Prior to exams it is useful to practise knowledge tests and follow up on feedback from wrong answers.
  • Consultation skills deficiencies are often about structure. They may be picked up because of difficulty with time management but, equally, there may be problems within the consultation. The registrar may not elicit the presenting problem adequately, convey a diagnosis, negotiate appropriately with the patient regarding management, utilise relevant community resources, introduce opportunistic prevention or implement adequate safety netting. All these skills, and others, are necessary in managing patients safely and competently in general practice. There are many “models” of the GP consultation which can be helpful to learners if discussed explicitly. It can also be useful to have registrars sitting in for short periods with different GPs in the practice in order to observe different consulting methods. However, this is less useful if it is just a passive process and the registrar does not get the chance to discuss and reflect on different approaches. The most useful coaching is direct feedback as a result of observation by supervisors and teaching visitors. This may require extra funding.
  • Inadequate clinical reasoning is the more challenging diagnosis. Good clinical reasoning is something you hope a registrar would have acquired through medical school and hospital experience, but this is not always the case. Even if knowledge content and procedural skills are adequate, poor clinical reasoning is an unsafe structure on which to build. This issue may come to light through failure in the KFP or through observation by the supervisor. It may be necessary to go back to basics. A useful method is to utilise and explore Random Case Analysis (RCA) in teaching sessions. A helpful article is http://www.racgp.org.au/afp/2013/januaryfebruary/random-case-analysis/ – note particularly the use of “why?” and “what if?” questions when interrogating. Sometimes clinical reasoning needs to be tweaked to be appropriate for general practice. A re-read of Murtagh on this topic is always useful, and practice KFPs can reveal poor clinical reasoning. Registrars can sometimes be observed to apparently leap towards the correct diagnosis, or arrive circuitously at a correct and safe conclusion, without the clinical reasoning being obvious. In these circumstances it is useful to question the registrar about each stage of their thinking and decision making, so that they practise articulating their clinical reasoning.

In summary

Remediation “diagnoses” can be made in the areas of communication, consultation skills, knowledge and clinical reasoning (and, no doubt, others). The “symptoms” often come to light during observation, workshop quizzes, in-training assessments, case discussions and practice or patient feedback.  Management strategies include direction to appropriate resources, direct observation, video debriefing, case discussion, practice exam questions (with feedback and action) and random case analysis. Most organisations have templates for relevant “plans”, which are useful for keeping all parties on track.

Funders and standard-setters are more likely to have “policies” on remediation than any helpful resources on how to do it. There is not much in the literature, and it is often difficult to develop expertise on a case-by-case basis (with much individual variation).  Prior to the reorganisation of GP training in Australia, some Training Providers had developed educational remediation expertise which could be utilised in more extreme cases by other providers. As educators we need to develop our own skills, document our findings and share freely with others. Supervisors need to know what to do if they suspect a performance issue, i.e. communication channels with the training organisation should be open.

Early identification of the struggling learner

The holy grail and silver bullets

Early identification of learning needs is, of course, the holy grail of much education and vocational training. It has become even more pertinent in GP training since timelines for completion of training have been tightened and rigidly enforced.  Gone are the days of relatively leisurely acquisition and reinforcement of knowledge and skills, with multiple opportunities for nurturing the best possible GP skillset.

Consequently there is an even more urgent search for the silver bullet – that one test that will accurately predict potential exam failures whilst avoiding over-identifying those who would make it through regardless (effort and funds need to be targeted). If it all sounds a bit impersonal, well… there’s a challenge.


The term “at-risk registrar” is often used, but I have limited this discussion to the academic and educational issues in training. The discussion on predictors also often strays into the area of selection, in an effort to predict both who will succeed in training and who will be an appropriate practitioner beyond the end of training. That is beyond the scope of this discussion, although it does suggest utilising existing selection measures.

The literature occasionally comes up with interesting predictors (most of it is in the undergraduate sphere; vocational training is less conducive to research).  There are suggestions, for instance, that students who fail to complete paperwork, and whose immunisations are not up to date, are likely to have worse outcomes.  This is not totally surprising and rings true in vocational training – perhaps as the Sandra/Michelle/fill-in-a-name test.  The admin staff who first encounter applicants for training are often noted to predict who will have issues in training. This is no doubt based on a composite of attributes of the trainees, and the experienced admin person’s assessment is akin to a doctor’s informed clinical judgment.  However, it is not numerical and would not stand up on appeal; it is often an implicit flag.  Obviously undergraduate predictors may differ from postgraduate predictors, but there is always a tendency to implement tools validated at another level of training. They should then be validated in context.

Note that, once in training, the reason for identifying these “at risk” learners is to implement some sort of effective intervention to improve their outcomes. This requires diagnosis of the specific problems.

Thus, there is interest in finding the one test that correlates with exam outcomes – and there may be mention of P values, ROC curves, etc. Given that different exams test different collections of skills, it is not surprising that one predictor never quite does the job.  As an educator I’m not happy with something that just reaches statistical significance but is too ambiguous to apply on the ground.  I want to feel confident that a set of results can effectively detect at least the extremes of learner progression through training: those who will sail through regardless and those who are highly likely to fail something (if no extra intervention occurs).

The “triple test”

If an appropriate collection of warning flags is implemented, the number of flags tends to correlate with exam outcomes (our only current gold standard). It is possible to identify a small number of measures that do this best, and work has been done on this.  This measure + that measure + a third measure can predict exam outcomes with a higher degree of accuracy.  My colleague, Tony Saltis, interpreted this as “like a triple test”.  It appeared to me that this analogy might cut through to educators who are primarily doctors.  In the educational sphere the analogy can be extended (although one should not push analogies too far).  Combining separate tests can provide extra predictive accuracy.  In prenatal testing there has been a double test, in some places a quadruple test, and now the more expensive cell-free foetal DNA test, which is not yet universally used. There are pros and cons of different approaches.  Extra sensitivity and specificity for one condition does not mean that a test detects all conditions and, of course, in Australia, the different modality of ultrasound added to that particular mix.

Any chosen collection of tests will not be the final answer. Each component of any “triple (or quadruple) test” should have the usual constraints: it should be performed in the equivalent of accredited and reliable labs, in a consistent fashion, and the results of screening tests should be interpreted in the context of the population on which they are performed.  They also need to be performed at the most appropriate time.
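To push the screening analogy one step further, the trade-off between over- and under-identification can be made concrete with a small sketch. Everything below is hypothetical – the flag counts, the cohort and the thresholds are invented for illustration, not drawn from any real training data.

```python
# A minimal sketch, assuming a purely hypothetical cohort: each registrar
# has a count of warning flags (0-3) and a final exam outcome. We screen
# with "flag count >= threshold" and compare against the exam result,
# the only available gold standard.

def sensitivity_specificity(flag_counts, failed, threshold):
    """Sensitivity and specificity of the screen 'flags >= threshold'
    for predicting exam failure."""
    tp = sum(1 for c, f in zip(flag_counts, failed) if c >= threshold and f)
    fn = sum(1 for c, f in zip(flag_counts, failed) if c < threshold and f)
    tn = sum(1 for c, f in zip(flag_counts, failed) if c < threshold and not f)
    fp = sum(1 for c, f in zip(flag_counts, failed) if c >= threshold and not f)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Invented data: flag counts per registrar, and whether they later failed.
flags = [0, 0, 1, 1, 2, 2, 3, 3, 0, 1]
failed = [False, False, False, True, True, False, True, True, False, False]

for k in (1, 2, 3):
    sens, spec = sensitivity_specificity(flags, failed, k)
    print(f"threshold >= {k} flags: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

As with any screening test, raising the threshold trades sensitivity for specificity: a low cut-off catches every at-risk registrar but flags some who would have sailed through, while a high cut-off misses some who go on to fail. The “right” threshold depends on how costly a missed at-risk registrar is compared with an unnecessary intervention – and, as above, it must be validated in the context in which it is used.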

Hints in the evidence

I have previously found that rankings in a pre-entry exam-standard MCQ are highly predictive of final exam results. However, to apply this in different contexts there is a proviso that it be administered in exam conditions, and the significance of specific rankings can only confidently be applied to the particular cohort. The addition of data from interview scores, possibly selection bands, types of early in-training assessments and patient feedback scores appears to add to this accuracy in the data examined – particularly for the OSCE exam (Regan C. Identifying at-risk registrars: how useful are components of a Commencement Assessment? GPTEC, Hobart, August 2015).  Research is also ongoing in Australian GP training in other regions by Neil Spike and Rebecca Stewart et al (see GPTEC 2016).  I would suggest that the pattern of results is important.


The way forward

Now that GP training in Australia has been reorganised geographically, it is up to the new organisations (and perhaps the colleges) to start collecting all the relevant data anew and to ensure it is accessible for relevant analysis. There is much data that can potentially be used, but there needs to be a commitment to this sort of evaluation over the long term. It should not be siloed off from the day-to-day work of educators, who understand the implementation and significance of these data.

Utilising data already collected would obviously be cost-effective and time-efficient, alongside any additional tools devised for the purpose. I suspect there is a useful “triple test” in your particular training context, but you need to generate the evidence by follow-up. Validity does not reside in a specific tool but includes the context and how it is administered.  There needs to be an openness to future change depending on the findings.  The pace of this change (or innovation) can, ironically, be slowed by the need to work through IT systems, which develop their own rigidity.

This is an exciting area for evidence-based education and the additional challenge is for collegiality, learning from each other and sharing between training organisations. Only then can we claim to be aiming for best practice.

Of course the big question is, having identified those at risk – what and how much extra effort can you put in to modify the outcomes and what interventions have proven efficacy?

Role Models

This week was Book Club for me, and it was my turn to choose the book: Gratitude, by Oliver Sacks. I have been a fan of Sacks’ books since I first read Awakenings.  As background to Gratitude I also read “On the Move”, the second part of Sacks’ autobiography.  As usual I was impressed by the amazing details that memoirists appear to recall from their early years but, in his case, it appears he had kept a journal.  Anyway, the topic for this post was raised for me by Sacks’ recollections of his first “house job” at the Middlesex Hospital in 1959.  His second six months was on the neurological unit, and he reminisces about his two chiefs, “a brilliant but almost comically incongruous pair.”

Bear in mind that this job was in the days of Doctor in the House, when eminence was above evidence and eccentricity was a virtue.

One boss was described as “genial, affable, suave” with an odd “slightly twisted smile”.   He was someone who “seemed to have all the time in the world for his housemen and his patients.” Patients found his presence therapeutic and he remained interested “in the lives of his housemen long after they had moved on.”   The other boss was “sharp, impatient, edgy, irritable” with “ferocious, jet-black eyebrows” and an apparent dislike of housemen and patients.  We have probably all experienced such scary bosses.   He recalled they had very different approaches to examining patients: one being extremely methodical and thorough with emphasis on diagnostic algorithms whereas the other was extremely intuitive.

He sums up by saying “Both influenced me, in good but different ways….More than fifty years later, I remember them both with affection and gratitude.”

Reading this made me think about how, regardless of our particular intentions (as supervisor or educator) learners will be taking on board certain messages conveyed by us whether we know it or not. It also suggests that learning in this way requires discernment by the learner.

The literature

There is much evidence in the literature about the effect of role models in medical education. If you want to pursue this further, see BEME Guide No 27: Doctor role modelling in medical education. Med Teach. 2013 Sep;35(9):e1422-36. http://www.ncbi.nlm.nih.gov/pubmed/23826717

Role models have been shown to affect career choices and studies have looked at some of the attributes of “good” role models. The review noted the complex nature of role modelling and the need to understand negative modelling such as when undergraduates experience conflict between what they are taught and what is modelled in their hospital clerkships.

AMEE Guide No 20 discusses two aspects – clinical and teaching role models – in “The good teacher is more than a lecturer: the twelve roles of the teacher”. RM Harden & J Crosby, Medical Teacher, Vol. 22, No. 4, 2000. http://www.tandfonline.com/doi/abs/10.1080/014215900409429


Don’t underestimate role modelling

In medical education and professional development we often focus on direct teaching methods or curricular content.  The textbook Understanding Medical Education: evidence, theory and practice (Swanwick, 2013) addresses the topic of role models.  It notes “the literature continues to support role-modelling as a pervasive means of teaching and a powerful means of learning.”  It suggests that teachers model a long list of things including: “knowledge, attitudes, behaviour, approaches to problems, applications of knowledge and skill, and interactions with colleagues, learners, other health professionals, patients, and families”.  Numerous articles describe it as the most effective strategy in clinical education, comparing favourably to lectures or small groups in some areas. It is acknowledged as particularly important in terms of professional conduct and good practice.  It is a process that is core to training that is based on an apprenticeship model. If it is so important we should give it more attention.

The RACGP standards for training note that it is “essential that all supervisors provide excellent professional and clinical role modelling”. The limited description lists unrestricted registration, involvement in the profession, Fellowship and participation in CPD aimed at improving performance as a general practice educator.  Curricula for supervisors tend to specify the content of what should be modelled (eg use of EBM), but this obviously changes with context and priorities.

What are the characteristics of a good role model?

Characteristics identified in numerous studies include: enthusiasm for medicine, clinical reasoning skills, holistic care, good communication skills and the doctor–patient relationship. Registrar feedback I have noted includes comments on supervisors as: enthusiastic, keen to teach, friendly, relaxed, encouraging patient care, excellent mentor, high medical standard, advice always up-to-date, up-to-date practice, good ethics, committed and high quality supervision.  Similarly, registrars are generally very aware of characteristics they don’t want to emulate.

But what are the real-life challenges?

There are increasing numbers of medical professionals needing to be trained, and the recruitment of supervisors in many settings can be ad hoc, with priority given to convenience and capacity rather than quality. In any case, quality is difficult to assess – and do learners require perfect role models anyway?  Fortunately not. Perfection has presumably not been the case in the past and is likely to be unachievable in the future, so mandating it (in the absence of evidence as to what constitutes the ideal) would be counter-productive.

Perhaps the most sensible approach is three-pronged

Role-Modelling happens regardless, so

  1. Make it a priority to encourage clinical teachers to develop a conscious awareness of role-modelling. This is the conclusion of the BEME paper. It needs to be recognised as a distinct process which is as influential as teaching in a formal training session. Education and reflection on this may enhance the process
  2. Facilitate registrars’ participation in the process by encouraging conscious reflection on what they are learning from the example of their supervisors, whether positive or negative. There are many benefits to this such as practising reflection and taking more charge of their own learning
  3. The relevant system / organisation needs to support and facilitate the above in various ways, given its importance

Maybe in fifty years’ time the learners will still be reflecting on their experiences as did Oliver Sacks.
