
Teaching critical thinking – and other “soft skills”

I noticed this tweet and reply this week. When, indeed, did critical thinking become a soft skill? The response was by a medical researcher. No wonder we have to keep thinking about teaching it if it isn’t given priority at earlier levels of education. Perhaps critical thinking is not welcome when it comes to voting, but in medical practice a critical mind is crucial. It is obviously crucial in research, and it is crucial at every stage of applying evidence-based practice.

I had previously had in mind to write a blog post on the “non-clinical skills” that are required to be taught in training. This was prompted by a talk I attended in September at the EURACT conference about the new generic professional capabilities framework in the UK https://www.gmc-uk.org/-/media/documents/Generic_professional_capabilities_framework__0817.pdf_70417127.pdf which is to be applied to all postgraduate training. The speaker, from the RCGP, described how the GMC has now mandated new domains to be continually assessed during training. These include, for example, capabilities relating to patient safety and quality, leadership, research, professional knowledge, and education and training. So far so good. It did not sound too different from the roles in the CanMEDS framework (although the “continually assessed” part sounded a bit ominous). The speaker stated that these domains were to be given equal emphasis with clinical knowledge and skills. They had enthusiastically embarked on teaching a research component in GP training, which had been evaluated positively by selected and engaged supervisors and registrars. They were now faced, however, with rolling it out to everyone, and admitted there might be challenges with non-engaged supervisors and struggling registrars.

These non-clinical skills are obviously relevant for clinical people, and perhaps even more so for those involved in education. I recall from my student days how students (and some staff) tended to give a bit of a nod and a wink to the equal weight given to the different domains in the curriculum. Similarly, it is quite frustrating as an educator when learners object to spending time on “soft stuff” that we think is crucial and worthwhile. On the other hand, it is quite understandable for learners to feel under pressure from an increasingly busy curriculum, the need to pass assessments, and concern for patient safety if their clinical skills are not up to scratch.

I think a few points can be drawn from this. Educators have a responsibility to evaluate the curriculum and ensure it is not just comprehensive and responsive to various stakeholders (academic, political, legal or regulatory bodies) but also meaningful to the learners. We also need to convey why these other domains are important and how multiple competencies contribute to the performance of a clinical activity. From an educational perspective, are these capabilities teachable, and are they assessable? From a teaching perspective, it is also preferable to teach in context. “Critical thinking skills”, for example, are more effectively taught in a context meaningful to the adult learner (probably the clinical context for most medical learners) and not abstracted from the domain-specific content the learners are seeking to master. I addressed this aspect in an earlier post: http://mededpurls.com/blog/index.php/2018/10/04/making-the-implicit-explicit-a-core-concept-in-clinical-teaching/ Teaching in context may help prevent these capabilities from being siloed as “soft skills”, but the complexity does make them harder to document and account for in a managerial sense, and that latter priority often predominates. Learning in context involves articulating your thinking, and this applies particularly to the clinical supervisor.

So, consider, with our increasingly impressive curricula and standards: is there a divide between what is stated, what we actually teach, and what we test? And how is it perceived by the learners? We need to monitor the bigger picture of what we are doing in education. Bear in mind that similar requirements may one day be coming to a training program near you.


Confound it! How meaningful are evaluations?

We generally agree that we should evaluate our teaching. It’s a good thing. But how meaningful are some of these exercises?

Just recently I saw a link on Twitter to a delightful article entitled “Availability of cookies during an academic course session affects evaluation of teaching” https://www.ncbi.nlm.nih.gov/pubmed/29956364 If this had been published in December in the BMJ, I might have thought it was from the tongue-in-cheek Christmas issue. Basically, it reported that providing chocolate cookies during a session of an emergency medicine course affected the evaluations of that course. Those provided with cookies rated the teachers more highly and even considered the course material to be better, and these differences were reported as statistically significant. Apart from making you think it might be worth investing in some chocolate chip cookies, what other lessons do you draw from this fairly limited study?

This raised a few issues for me. Firstly, it made me reflect on how meaningful statistically significant differences are in the real world. Secondly, it made me quite concerned for staff in organisations such as universities, where weight is sometimes placed on such scores when staff are being judged. That concern follows from the third point: it brought to mind the multiple possible confounders in any attempt at evaluation.

When considering your evaluation methods, think about several issues. If you are collecting participant feedback, how timely is it? If you intend to measure something further down the track, such as knowledge acquisition, at what point is this best done? Question the validity and reliability of the tools being used. Consider using multiple methods. Consider how you might measure the “hard to measure” outcomes. If programs (rather than just individual sessions) are being evaluated, then ratings and judgments should be sourced more broadly. I will not go into all the methods here (there is a good section in Understanding Medical Education: Evidence, Theory and Practice, ed. T. Swanwick), but the article above certainly raised points for discussion.

Surprisingly, student ratings have dominated the evaluation of teaching for decades. If evaluations are in an acceptable range, organisations may unwittingly rest on their laurels and assume there is no need for change. Complacency can limit quality improvement when there is an acceptance that the required boxes have been ticked but the content of those boxes is not questioned more deeply.

More meaningful quality improvement may be achieved if evaluation methods are scrutinised more closely in terms of the relevant outcomes. Outcomes that are longer term than immediate participant satisfaction could be more pertinent, and some quality measures may be necessary over and above tick-box responses. The bottom line: consider the possible confounders and question how meaningful your evaluation process is.