Lessons from Medical Education – from the ‘Standardised Patient’ to the ‘Standardised Person’ – Transferable Principles for Assessment Centres.

The ‘standardised patient’ methodology has a rich and well-documented history in Medical Education, spanning some four decades of global practice and research. The published literature is substantial, comprising reflections, practice tips and psychometric data that validate ‘role play/simulation’ methodologies and offer reassurance about robustness, fairness and credibility in summative (pass/fail) assessment and in recruitment and selection scenarios.

The standardised patient methodology is widely used in the training and assessment of undergraduate health professionals, and features routinely in high-stakes recruitment to postgraduate training posts and in membership examinations for the medical Royal Colleges. The stakes here are high. Public safety is the first and foremost concern, closely aligned with the requirements of the General Medical Council and other national clinical regulatory bodies, so only methods that withstand the highest level of scrutiny, supported by hard evidence, are adopted.

There are obvious parallels in non-healthcare sectors, where simulation (also called role play) has for many years been used as a recruitment or testing strategy, often – but not always – as part of an Assessment Centre. The basic method is the same as in healthcare: an individual trained to portray a client, stakeholder or colleague works to a stretching (but plausible) scenario brief in a controlled, observed environment, so that assessors can see how a candidate or employee responds in a live interpersonal encounter with another human being.

It is well known in all sectors that ‘intent to communicate’ – that is to say, a written account of what one would do in a sensitive situation, or a verbal description for an interview panel of how one would hope to respond – doesn’t necessarily translate into a live, spontaneous situation. Simulation, as we know, avoids ‘rote’ (learned) answers that the candidate can’t actualise, and creates a rich context for observing not just skills, but professional identity and attitudes.

The word ‘standardised’ appears more routinely in medical literature than in corporate or management accounts. The principles are similar, and may simply be termed differently, but it is worth taking a brief look at what can be confirmed from the medical experience, given that this body of research isn’t readily accessible across sectors.

‘Standardisation’ works on the fundamental premise that all candidates in any test should have the same opportunity to do well. Professional medical role players working on assessment programmes are trained to present this equality of experience while responding flexibly to the candidate’s own input, so that skilled individuals are rewarded – e.g. by receiving a fuller answer to a thoughtful or well-considered question. This will be familiar in some management/leadership sectors. Role players working as ‘standardised patients’ for testing purposes ideally undergo similar training to the observing clinical examiners, and if they have scoring responsibility – either numerically or in terms of recommendation – they should have been calibrated against outcomes in advance of the test using mock examples.

The core components of a standardised approach for a role player acting as a standardised patient are:

  • That all role players taking part in ANY testing are experienced professionals with credentials in simulation. Drafting in untrained personnel, or ‘actors’ inexperienced in education, is a risk, and could prove indefensible were an outcome to be challenged.
  • That all role players (or whatever terminology is used) are, specifically, assessment trained, and understand basic assessment theory.
  • That they are briefed as a group to be consistent – that is to say, to present the same history, features, desires, prior knowledge, intellectual capacity, personal circumstances, etc. as each other.
  • That this consistency applies across parallel centres, on different days, and at different times of day. Scenarios cannot, and must not, ‘grow’ during a protracted process.
  • That degrees of prompting (how much ‘help’ can be offered to a struggling candidate, or a well intentioned one who has gone off track from the scored domains) are discussed in advance, pre-agreed, and consistent.
  • That the role players are familiar, through experience of teaching/training in the sector or through thorough pre-preparation, with the outcomes of the test and with what is and is not expected of a candidate at this level.
  • That any relevant semi-scripted inclusions are fed in as naturally as possible if they don’t occur in the conversation by chance.
  • That they can respond flexibly in role, so that while the initial presentation is the same for each candidate (same starting point) the subsequent interaction appropriately reflects the skill, handling, behaviour and degree of trust engendered by each candidate.
  • That the role player has been trained to offer objective, evidenced feedback that is specific to the impact of a behaviour or approach, to inform the feedback process.
  • That any role player involved in making a scoring recommendation has been adequately prepared as an assessor. This includes the way written evidence is recorded, given that candidates have the right to access all data pertaining to them, retrospectively.
  • That role players taking part in scoring have undergone the same mandatory pre-preparation as the observing clinical assessors, which might include confidentiality agreements, equality and diversity training, etc (as specific to the organisational body managing the programme).
  • That role players are themselves appropriately professionalised – i.e. able to work in partnership with the client, demonstrate gravitas in their approach, engage with the overall process (logistically and intellectually), and themselves demonstrate the communication and attitudinal attributes commensurate with a high-stakes setting.
  • That data relating to score outputs in summative assessment are analysed by an independent statistician, to ensure the internal consistency of the test, identify any ‘hawk’ or ‘dove’ outliers, and ensure that the overall process is robust.
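The final bullet – statistical screening for ‘hawk’ (harsh) or ‘dove’ (lenient) scorers – can be sketched in a few lines. This is purely an illustration, not any programme’s actual analysis: the scores are invented, and a real statistician would use richer psychometric models alongside a simple deviation check like this one.

```python
# Illustrative sketch: flag 'hawk' (harsh) and 'dove' (lenient) examiners
# by comparing each examiner's mean awarded mark against the cohort mean.
# All data below are hypothetical.
from statistics import mean, stdev

# Hypothetical marks awarded by each examiner (0-10 scale)
scores = {
    "examiner_a": [7, 8, 6, 7, 8],
    "examiner_b": [4, 3, 5, 4, 4],   # possible 'hawk'
    "examiner_c": [9, 9, 10, 9, 9],  # possible 'dove'
    "examiner_d": [6, 7, 7, 6, 7],
}

examiner_means = {name: mean(marks) for name, marks in scores.items()}
overall = mean(examiner_means.values())
spread = stdev(examiner_means.values())

def classify(examiner_mean, threshold=1.0):
    """Label an examiner whose mean deviates by more than `threshold`
    standard deviations from the overall mean of examiner means."""
    z = (examiner_mean - overall) / spread
    if z < -threshold:
        return "hawk"
    if z > threshold:
        return "dove"
    return "in range"

for name, m in sorted(examiner_means.items()):
    print(f"{name}: mean={m:.2f} ({classify(m)})")
```

In practice the independent statistician would also test internal consistency (e.g. Cronbach’s alpha across stations) and break results down by the variables mentioned later in this piece (date, gender, ethnicity, experience), but the outlier check above conveys the basic idea.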

In Medicine there are standardised patients. In other sectors there are ‘standardised people’, covering a wide range of situations, and needs.

Testing is not, and never will be, a perfect replication of “authenticity”. Nor, arguably, should it be. Few drivers will ever replicate the route or back-to-back manoeuvres of their driving test in a daily commute, but we accept that it is a not unreasonable test of skill and safety, and that nerves may affect performance. Tailing a novice unseen on a random route would not only be logistically challenging; it would create disparity, in that some drivers would have ‘easier’ routes than others, and key safety items could be omitted.

That’s an obviously blunt example, but the same is true in medical standardisation. Whether the role player is simulating clinical/physical signs for a technical skills assessment or working on a communication-orientated scenario, the use of a trained lay individual rather than an ‘actual’ patient adds a level of fairness, through consistency, that drafting in patients – who by nature are unwell – would not.

One could, arguably, recruit 10 patients with high blood pressure, or 10 patients with poorly controlled diabetes, for ‘exam days’, but their individual experiences of the illness, exact clinical signs, level of concentration/wellness and so on would introduce a level of variability that the standardised patient – in so far as is feasible – eliminates. The entire history, symptoms, mood, expectations and past experience can be programmed to optimise a consistent, but human, experience across days and sites working in parallel.

I am just one practitioner with testing history in both medical and management assessment, and as such acknowledge that the experience of others may differ. Notwithstanding some very good examples, from my observations the lessons management centres can learn from Medicine relate to: application of the evidence base (dating from Harden’s work in the 1970s); the continuous generation of statistical data on score outcomes (by date, gender, ethnicity, experience and many other variables); the focus on consistency of presentation and agreed prompting; role player training and prior embedding in medical teaching; and the advance validation of briefs. Any individual taking part in the assessment of another should be, specifically, assessment trained, and should understand the principles bulleted above.

We’d expect no less if our own promotion, professional qualification, entry to a profession or examination were on the line. To do less is a risk, and when booking role players for a testing event received wisdom is to check their credentials as assessors, not merely as standard role players or ‘actors’. If such training is lacking, provide it: candidates have the right to challenge outcomes. As well as being fair on the day to simulators and candidates alike, all assessments should, in an ideal world, be transparent and defensible.

Dr Connie Wiskin