Selection into Surgical Training
created 01 December 2016
Many changes have taken place in surgical training since 2007; those in selection have perhaps been the most far reaching. Thought, hard work and organisation have resulted in a structured, fair and transparent process. Selection is much better than it used to be, but nothing is perfect and evolutionary development must continue.
When I and others of my generation went through training, we applied to a single hospital, a small group of hospitals or a region after scouring the BMJ for adverts, which appeared weekly. If we were lucky we got onto a rotation or, alternatively, into a single time-limited job. We were interviewed by locally eminent consultants, each asking their own favourite questions. There was little or no structure, interviewers could set their own priorities and simple voting determined the outcome. A recognised hierarchy of seniority among local trainees often influenced the result, and complaints of nepotism, real or perceived, were common. It was all too easy for a single consultant to blight the career of an individual to whom they took a dislike.
We were, however, able to try out different specialties and there’d always be another advert next week. Those of us lucky enough to get into the senior registrar jobs had been watched for some years and judged to be capable, but it was difficult to break into a different region where we weren’t known.
Things had to change with Modernising Medical Careers (MMC). Two grade (Core and Specialty) or one grade (Run Through) training made selection a high stakes process which could no longer include the vagaries described above. The first attempts at modernising selection were a well publicised failure. However, persistence, hard work and a successful pilot project resulted in the National Selection system we now have.
A published Person Specification describes essential and desirable, generic and specialty-specific attributes. This provides a transparent blueprint for applicants to aspire to and for selection panels to measure against. All specialties use a multi-station Selection Centre, rather than the traditional single panel, with each station addressing a specific aspect of the Person Specification. Performances are measured using well defined criteria and score sheets. Interviewers are trained to maximise fairness and the whole process is Quality Assured by lay and professional assessors. Advanced statistical techniques are used to monitor fairness and effectiveness. A technique known as Rasch analysis has been introduced in General Surgery to make the process even fairer by evening out differences between “hawk and dove” interviewers and between easy and difficult interview scenarios. Approval ratings from applicants and interviewers are consistently high.
How do current selection processes compare with the past? Where do we stand now? What about the future?
The current system compresses what might once have been years of assessment into two or three hours, but it aims to do so fairly, consistently and with transparency. All vestiges of nepotism have been removed, but in doing so, what was occasionally a valuable source of information – local, honest, informative and fair knowledge – has been lost. Nevertheless, increased fairness, a structured approach, quality assurance and formal analysis have, in my view, created a process which is immeasurably better than what we had in the past.
Specialties can learn from each other and the current annual JCST Selection Leads meeting is a valuable forum to share ideas. Should we go further? Some skills appear in the Person Specifications of all surgical specialties and so could, in theory, be assessed in a common selection process which would be supplemented with shorter specialty specific stations. General Surgery and Vascular Surgery share their entire selection processes, but should we extend this example to other specialties? I can see advantages and disadvantages and would be interested in hearing others’ views.
We now need to assess how selected trainees progress through training and how they perform as consultants. We also need to learn from those who are not selected – how do their careers develop and how do they perform as clinicians? By comparing eventual performance quality between selected and unselected applicants we can further refine selection and identify criteria which predict future success.
This, of course, opens up a whole new question – how should “success” in a clinical career be measured? But maybe that’s the subject of another blog…
Gareth Griffiths, Chair, SAC in General Surgery and incoming ISCP Surgical Director