Improving feedback from the MRCGP CSA examination
Dr Mahibur Rahman
We are often contacted by GP registrars and GP trainers asking for help understanding the feedback from the MRCGP CSA. Many doctors have commented that they find the feedback difficult to interpret. This has been recognised as an important issue, and a motion was recently passed at the LMCs conference calling for immediate improvement in the feedback from the CSA. In this article Dr Mahibur Rahman looks at the current feedback, the areas that could be improved, and ways to make the feedback clearer and more helpful for both trainers and registrars.
Understanding the current CSA feedback
Currently there are 2 main sections to the feedback from the CSA. The first gives the candidate’s total score from all 13 cases (out of 117), together with the pass mark for the date they sat the exam. This total score comes from the summative part of the assessment, in which 3 domains are graded for every case: data gathering, clinical management, and interpersonal skills.
For each domain, a candidate is graded with a score attached to each grade as follows: clear pass (3 marks), pass (2 marks), fail (1 mark), clear fail (0 marks). This gives a total score for each case of between 0 and 9.
To gain a pass, a candidate must get an overall score equal to or above the pass mark for a given day. This is adjusted each day using the borderline group method to ensure the standard of the exam remains the same each day. The actual pass mark is variable with a usual range between 72 and 77 out of 117.
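The scoring scheme described above can be expressed as a short calculation. Here is a minimal sketch in Python – the function names are illustrative, not part of any RCGP system, and the pass mark is supplied as an input because it varies by day:

```python
# Mark attached to each grade, awarded in each of the 3 domains per case.
GRADE_MARKS = {"clear pass": 3, "pass": 2, "fail": 1, "clear fail": 0}

def case_score(data_gathering, clinical_management, interpersonal):
    """Score for one case: the 3 domain marks summed, giving 0-9."""
    return sum(GRADE_MARKS[grade]
               for grade in (data_gathering, clinical_management, interpersonal))

def exam_result(case_grades, pass_mark):
    """Total over all 13 cases (0-117), compared with that day's pass mark."""
    assert len(case_grades) == 13
    total = sum(case_score(*grades) for grades in case_grades)
    return total, total >= pass_mark
```

For example, a candidate graded “pass” in all 3 domains of all 13 cases would score 13 × 6 = 78, just above the usual pass-mark range of 72–77.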
The second part of the feedback is formative – it relates to the 16 feedback statements provided by the RCGP in a grid. This grid can provide information on consulting areas that a candidate could improve on. It is important to understand that this part does NOT determine the score or whether a candidate has passed or failed – it is formative, and aimed at helping doctors identify areas of their consulting that they could improve. The current feedback looks like this:
What are the problems with the current feedback?
There is no breakdown of the marks awarded from each case (out of 9), and no way for a candidate or trainer to see clearly if marks were dropped in data gathering, clinical management or interpersonal skills for each case, or as a general trend over the course of the whole exam.
In some cases the formative feedback can help identify areas to work on, but in others it can lead to confusion. A common source of confusion is that candidates with the same number of crosses can have very different scores. Finally, where a candidate has no crosses relating to a specific case, many candidates think that they must have scored very well, or at least gained 6 or more marks out of 9. However, it is impossible to tell how well or poorly they performed in that case from the lack of crosses – they could have scored anywhere from 0 to 9. This is because:
- The formative feedback does NOT determine the score for a case – this is determined by the performance in the 3 domains being assessed. Scores for these are not provided in the current feedback as standard – candidates that want to access these scores can request their mark sheets under the Data Protection Act.
- Only feedback statements that were flagged in at least 2 different cases show up in the feedback provided to candidates – there are hidden crosses where a statement was only flagged in a single case. A candidate with no visible crosses could actually have had several crosses relating to feedback statements that did not occur again in other cases. This could have led them to score very poorly in that case, but they would not know it from looking at the feedback.
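The visibility rule described above – a statement is shown only when it was flagged in 2 or more different cases – can be sketched as follows (illustrative code, not the RCGP’s actual implementation):

```python
from collections import defaultdict

def visible_crosses(flags):
    """flags: list of (case_number, statement) pairs marked by examiners.
    Returns only the crosses the candidate would see: those whose
    statement was flagged in at least 2 different cases."""
    cases_per_statement = defaultdict(set)
    for case, statement in flags:
        cases_per_statement[statement].add(case)
    return [(case, statement) for case, statement in flags
            if len(cases_per_statement[statement]) >= 2]
```

A case whose only crosses involved single-occurrence statements would appear completely clean in the visible feedback – exactly the situation described above.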
This candidate failed the CSA by a few marks – look at the formative feedback for their first 3 cases:
This candidate scored 7/9 for the first case (joint problems), and 2/9 for the second case (acute illness), but there would be no way to know that they had performed really poorly in the second case from the current feedback. There were actually 3 feedback statements that were flagged in this case, but they don’t show up because those statements did not apply to any other case (and currently these statements are hidden).
How could the feedback be improved?
The GPC motion called for “the feedback from the MRCGP exams to be improved immediately”. Here are 3 simple ways that the feedback could be made clearer and more effective in helping identify areas to work on to improve performance. They can all be introduced using data that is already collected in the exam, and so could be implemented quickly with little additional cost.
1. Provide a breakdown of total marks for each domain as well as the total score. In the AKT, candidates already get a breakdown of their scores in the 3 domains (clinical medicine, organisational, and evidence interpretation). This would give a clearer indication of any weaker areas overall:
This candidate and their trainer can immediately see that they could make improvements in all parts of the consultation, but that the clinical management domain was their weakest overall. This may allow more targeted work on this part of the consultation. Without this information, this candidate (and their trainer) may focus more on the interpersonal domain, without realising that although this could be improved further, this is actually their strongest domain overall.
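Such a per-domain breakdown could be computed directly from the per-case domain marks that are already collected. A minimal sketch, assuming each case’s marks are stored as a (data gathering, clinical management, interpersonal skills) tuple:

```python
def domain_totals(case_marks):
    """case_marks: 13 tuples of (dg, cm, ips) marks, each 0-3.
    Returns the total per domain, each out of 39 (13 cases x 3 marks)."""
    dg, cm, ips = (sum(column) for column in zip(*case_marks))
    return {"data gathering": dg,
            "clinical management": cm,
            "interpersonal skills": ips}
```

Comparing the three totals immediately shows the weakest domain overall, rather than leaving the candidate to guess from scattered crosses.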
2. Provide the domain scores for every case as well as the formative feedback. Taking both the summative and formative feedback together provides more meaningful information and will allow easier identification of both consulting skills and curriculum areas that need improving. This could be provided by adding a separate table for the domain scores:
Looking at this, it is clear that this candidate had 2 cases where they performed very poorly – the young adult female with an acute illness, and the middle-aged female with a women’s health issue. These may be areas that they struggle with, and identifying them will allow focused improvement in knowledge.
3. Provide details in the formative feedback section of ALL statements that were flagged, even when this only applied to a single case. This will allow candidates to identify all areas that examiners felt they could work on – even candidates that have done well can benefit from knowing areas that they could improve. Combined with the summative feedback above, this would also make it easier to separate a candidate that is below the pass standard in multiple areas of multiple cases from one that had a couple of really poor cases due to poor knowledge of a specific curriculum area, or because they missed something key in that case. Here is the formative feedback from those first 3 cases that we looked at earlier; the second image shows all crosses (those that were previously hidden are shown in red for clarity):
You can see that, taken together with the domain scores, it is immediately clear why this candidate got such a low score in the acute illness case, and that had they performed better in this case, they may have passed. This would also help candidates understand their performance better: from the current feedback they may think that this was one of their better cases, when actually it was their worst. Providing this extra information does not jeopardise case security, but it does give more meaningful information to someone trying to improve.
How it would look together
All the feedback would fit onto 1 A4 page, allowing quick cross-referencing between the different sections. This is how the new feedback could look in the e-portfolio.
It is clear that further research needs to be carried out to investigate the possible reasons behind the differential pass rates between different groups – however, this will take time. By improving feedback immediately, we can ensure that candidates and trainers have clearer, more effective feedback. All these changes can be made using data that is already being collected, so they could be implemented quickly and with little additional cost. Hopefully this will enable more focused work on the key consultation skills that an individual doctor may need to work on to help them improve and pass the exam.
Are you a GP trainer or a GP registrar? What do you think about these ideas for improving the feedback from the CSA? Please share your thoughts!