
Learning In Practice

Effect of ethnicity on performance in a final objective structured clinical examination: qualitative and quantitative study

BMJ 2003; 326 doi: https://doi.org/10.1136/bmj.326.7393.800 (Published 12 April 2003) Cite this as: BMJ 2003;326:800
  1. Val Wass, senior lecturer in general practice (valerie.wass@kcl.ac.uk)a,
  2. Celia Roberts, senior research fellowb,
  3. Ron Hoogenboom, research assistantc,
  4. Roger Jones, Wolfson professor of general practicea,
  5. Cees Van der Vleuten, professor of educationc
  1. a Department of General Practice and Primary Care, Guy's, King's, and St Thomas's School of Medicine, London SE11 6SP
  2. b Department of Education and Professional Studies, King's College, London SE11 6SP
  3. c Department of Educational Development and Research, University of Maastricht, Maastricht, Netherlands
  1. Correspondence to: V Wass
  • Accepted 20 November 2002

Abstract

Objective: To assess the effect of ethnicity on student performance in stations assessing communication skills within an objective structured clinical examination.

Design: Quantitative and qualitative study.

Setting: A final UK clinical examination consisting of a two day objective structured clinical examination with 22 stations.

Participants: 82 students from ethnic minorities and 97 white students.

Main outcome measures: Mean scores for stations (quantitative) and observations made using discourse analysis on selected communication stations (qualitative).

Results: Mean performance of students from ethnic minorities was significantly lower than that of white students for stations assessing communication skills on days 1 (67.0% (SD 6.8%) and 72.3% (7.6%); P=0.001) and 2 (65.2% (6.6%) and 69.5% (6.3%); P=0.003). No examples of overt discrimination were found in 309 video recordings. Transcriptions showed subtle differences in communication styles in some students from ethnic minorities who performed poorly. Examiners' assumptions about what is good communication may have contributed to differences in grading.

Conclusions: There was no evidence of explicit discrimination between students from ethnic minorities and white students in the objective structured clinical examination. A small group of male students from ethnic minorities used particularly poorly rated communicative styles, and some subtle problems in assessing communication skills may have introduced bias. Tests need to reflect issues of diversity to ensure that students from ethnic minorities are not disadvantaged.

What is already known on this topic

UK medical schools are concerned that students from ethnic minorities may perform less well than white students in examinations

It is important to understand whether our examination system disadvantages them

What this study adds

Mean performance of students from ethnic minorities was significantly lower than that of white students in a final year objective structured clinical examination

Two possible reasons for the difference were poor communicative performance of a small group of male students from ethnic minorities and examiners' use of a textbook patient centred notion of good communication

Issues of diversity in test construction and implementation must be addressed to ensure that students from ethnic minorities are not disadvantaged

Introduction

Students from ethnic minorities seem to perform less well overall than white students in both undergraduate and postgraduate medical examinations.1-4 Any form of potential racial discrimination within our examination systems is a cause for concern.5 6 Problems with complex discourse may disadvantage students trained overseas in oral examinations, but there is little further published work on the impact of differences in ethnicity on performance in examinations.7 This is becoming an increasingly important issue in undergraduate assessment. Fairness and consistency of assessment across UK medical schools are crucial.8 We need to understand any source of potential bias that may lead to racial disadvantage when developing tests for these skills.

Standardisation is a key issue when looking for potential discrimination within examinations; the more standardised the content, the less the potential for bias. Objective structured clinical examinations are currently the most widely used method of assessing undergraduate skills and include standardised simulated scenarios to test communication skills.9 10 Yet it is still difficult to achieve true objectivity.11 12 However carefully designed, the scenario presented to students will vary, since neither simulated patient nor student is speaking from a script. Examiners and simulated patients make judgments based on an impression of how well the student managed the consultation. This judgment, in turn, will be informed by their assumptions about what makes an effective consultation.13

We aimed to investigate whether students from ethnic minorities are disadvantaged by a bias in marking in a final year objective structured clinical examination, with a particular focus on stations assessing communication skills.

Methods

Our study took place in June 1999 during the final MBBS examination of the then Guy's and St Thomas's medical school. This comprised a three and a half hour objective structured clinical examination conducted over two days, consisting of two stations for history taking of long cases (21 minutes each) and 20 stations (seven minutes each) for clinical examination (nine stations), communication skills (six), and practical skills (five). The stations were similar but not identical on the two days. Simulated patients were professionally trained to standardise the scenarios used on communication stations.

A different examiner marked each station against a checklist and gave a final five point global rating for overall clinical competency. Simulated patients awarded a five point global rating for overall communication skills, independent of the examiner. The examiners and simulated patients had been briefed on the procedure. A minimum competence score for each station was set in advance, using the Angoff standard setting method.14
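In essence, the Angoff procedure asks each judge to estimate, for every checklist item, the probability that a borderline (minimally competent) candidate would complete it correctly; the station's pass mark is the average of these estimates. The short Python sketch below illustrates only that arithmetic; the judge ratings and item counts are invented for illustration and are not data from this study.

```python
import numpy as np

def angoff_cut_score(judge_estimates):
    """Angoff standard setting: each row holds one judge's estimated
    probabilities that a borderline candidate completes each checklist item.
    The cut score is the mean estimate, expressed as a percentage."""
    estimates = np.asarray(judge_estimates, dtype=float)
    # Average over items for each judge, then over judges
    return estimates.mean(axis=1).mean() * 100

# Hypothetical ratings from three judges for a five item checklist
judges = [
    [0.8, 0.6, 0.7, 0.9, 0.5],
    [0.7, 0.6, 0.6, 0.8, 0.6],
    [0.9, 0.7, 0.8, 0.9, 0.4],
]
print(f"Minimum competence score: {angoff_cut_score(judges):.1f}%")
```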

Each day we selected two communication scenarios, using role players from different ethnic backgrounds (table 1). The students gave informed verbal consent for video recording the scenarios. The local research ethics committee approved our study. Details of the students' ethnicity were made available after the examination.

Table 1.

Communication scenarios


The students were grouped as white, south Asian (Indian, Pakistani, Bangladeshi, Chinese, and Asian other), Afro-Caribbean, and other. For the purpose of our study, all students from ethnic minorities were categorised as one group, “ethnic minorities” (82 students) and all other candidates as “white majority” (97 students).

Quantitative analysis

We analysed mean performance for each day on all 22 stations, on stations grouped into communication skills, practical skills, clinical examination, and long cases, and on the specific study stations. We used an independent two sample t test to examine relations between student performance and ethnicity, regarding P values greater than 0.01 as non-significant. The reliability of each objective structured clinical examination was calculated with Cronbach's α.
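As a rough illustration of these two calculations (not the authors' analysis code, and using made-up score matrices whose sizes merely echo the study), the group comparison and the internal consistency estimate could be computed as follows:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(scores):
    """Cronbach's alpha for a candidates x stations score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of stations
    item_vars = scores.var(axis=0, ddof=1)   # variance of each station
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical communication scores (%) for the two groups
white_majority = np.random.default_rng(1).normal(72, 7.6, 97)
ethnic_minority = np.random.default_rng(2).normal(67, 6.8, 78)

t, p = stats.ttest_ind(white_majority, ethnic_minority)  # independent two sample t test
print(f"t = {t:.2f}, P = {p:.4f}, significant at P<0.01: {p < 0.01}")

# Hypothetical 175 candidates x 22 stations matrix for reliability
osce_scores = np.random.default_rng(3).normal(70, 8, size=(175, 22))
print(f"Cronbach's alpha = {cronbach_alpha(osce_scores):.2f}")
```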

Qualitative analysis

We viewed all video recorded encounters, together with the recorded comments made by simulated patients and examiners after students had left the station. The duration of the interaction, the student's ethnicity, and observations made during the viewing were recorded on a standard form. We assessed the extent to which simulated patients and students established a relatively patient centred encounter and noted any potential misunderstandings, false assumptions, or explicit discriminatory behaviour. The analyst (CR) viewed the encounters "blind," allocating grades and comparing these after the viewing with the ratings from the simulated patients and examiners. Discrepancies between analyst, simulated patient, and examiner were recorded.

We used these records to select specific interactions for detailed transcription. We identified recurring themes, which acted as background to the detailed discourse analysis. We used these to clarify the complexity of the doctor-patient consultation and the communicative demands it placed on students.15

Results

We excluded four of the 179 students, as their ethnicity was undeclared. Table 2 gives the ethnicity, sex, and age of the remaining 175 students. Seventy eight (45%) were from ethnic minorities. All but two students had received secondary school education in the United Kingdom.

Table 2.

Analysis of ethnicity by sex and age for 175 students of known ethnicity

Table 3.

Mean scores, standard deviations, and t and P values (independent two sample t test) for students from the white majority compared with students from ethnic minorities for components of the objective structured clinical examination on days 1 and 2


Table 3 shows the mean performance of the students in the overall examination and in the specific study stations. Mean performance on communication stations was significantly higher for white majority students than for students from ethnic minorities on both day 1 (72.3% (SD 7.6%) and 67.0% (6.8%); P=0.001) and day 2 (69.5% (6.3%) and 65.2% (6.6%); P=0.003). The Cronbach α reliability of the objective structured clinical examination was 0.74 and 0.76 on days 1 and 2, respectively.

We were unable to assess 49 (14%) of the video recorded interactions due to technical faults. We found no explicit examples of breakdown in communication or of discriminatory behaviour in the remaining 309 interactions. Neither simulated patients nor students showed, through talk or bodily movements, any expression identifiable as a negative response to the other's ethnicity.

For detailed discourse analysis we transcribed 28 (9%) interactions, representing a range of scores from high to low and including students from both groups. Two main findings emerged.

Firstly, students created different interactional climates. Those receiving high grades were relatively empathetic, responsive, and persuasive, building a joint problem solving framework with the patient. Conversely, some failed to build this framework, made various moves to distance themselves from patients, and were given low grades by both examiners and simulated patients.16 Students from both groups failed to create this interactive framework (see box and bmj.com), but relatively more male students from the ethnic minority group were in this category: 15 (12 male) of the 22 students scoring below minimum competence were from ethnic minorities. In these instances there were no obvious cultural or linguistic differences, although these students were more likely to have pronunciation, word stress, and intonation influenced by their heritage language.

Framework for good and poor communicative styles

The framework for communicative style consisted of four levels:

Performance factors—these included clarity, slips of the tongue, hesitations, voice quality, and aspects of non-verbal communication

The design of questions and responses—for example, the ways in which students showed that the patient's problem needed to be jointly managed, or the ways in which students were sensitive to the needs of the patient; or by contrast the negative labelling of the student or the use of “trained” empathy

The overall thematic staging of the consultation—for example, a student who had to resist giving a methadone prescription to a drug addict managed the consultation so as neither to give in nor to refuse too early; or, by contrast, a student shifting rapidly from one topic to another and preventing the simulated patient from following the student's line of reasoning

Ideological positioning of the student—for example, how much to rely on personal authority and how much on medical authority

Secondly, there were instances where the examiner gave top marks but the simulated patient from an ethnic minority gave a lower mark. These students tended to use a style in which explicit guidance was deferred, there was more talk about the nature of the consultation, and there was more talk about talk—for example, “But first I'd like to know a little more about you” (see bmj.com). Although this style fitted well with the white examiners' textbook notions of a patient centred consultation, it was not rated so highly by some simulated patients from ethnic minorities.

Discussion

We found differences in the mean performance of students from ethnic minorities on communication stations in the final year objective structured clinical examination. Although we saw no obvious evidence of breakdowns in communication or of discriminatory judgments, two subtle differences emerged: a particularly poor communicative style that may have distanced some students from ethnic minorities from the simulated patient16 and instances where the examiner's assumptions did not match the expectations of the simulated patients.

We do not want to claim too much from these two points: it would be wrong to reify ethnicity and assume problems and differences in communicative style on the basis of an individual's ethnicity alone. Similarly, our comments on the differences between examiners and the simulated patients from ethnic minorities are speculative and need further investigation. In combination, however, these two factors may account for at least some of the differences in the ratings of white students and students from ethnic minorities.

The style adopted by some students of distancing themselves from patients reflects a medical model of consultation rather than the more social one preferred by examiners. Students from ethnic minorities might be more likely than white students to use this style because a medical model is less demanding of communication skills and may be perceived as appropriate. If so, differences in motivation and learning styles also need to be considered.17 Several complex factors, including styles of communication, values, and ways of learning, may therefore all be important and may be related to the ways in which students are socialised into medical school culture.

Students who live outside the medical school, and whose networks of family and friends offer quite different communicative experiences from those in the university, may be less exposed to the informal and social talk around medicine that occurs in institutional life.18 19 They may therefore have less opportunity to tune into current institutional norms about what counts as a good consultation.

There is a case for developing a wider repertoire of communicative styles when setting the stations. Important questions have been raised for educators in cross cultural communication in medicine, but these need to be addressed at the local level, for example by examiners working with a range of non-traditional students and simulated patients from ethnic minorities to build up a wider repertoire of styles sensitive to the diverse backgrounds of patients.13 Institutions may also be unaware of hidden processes that reward some students and penalise others in final examinations.

Acknowledgments

We thank Professor Gwyn Williams and Dr Charles Twort for approving this study, Stevo Durbaba for his technical assistance, Phil Doulton of Professional Role Play, Nora Edmead for her administrative support, and the students and examiners for their cooperation.

Footnotes

  • Funding The King's Fund.

  • Competing interests None declared.

  • Extracts from transcripts appear on bmj.com

References
