As someone who firmly believes in evidence and empirical data, I’ve often wondered whether anyone has studied medical school practices themselves. As students, we tend to assume that the policies our schools put forth have some demonstrable merit. For example, I’ve always thought that our evaluations for third-year clerkships were highly subjective. Yes, there is a scale that evaluators must follow, but has anyone ever examined how evaluations vary across clerkship sites, or across individual evaluators?
I recently came across a study conducted at the Alpert Medical School that examined third-year evaluations. It was a retrospective analysis covering one year of data: more than 4,000 evaluations completed by roughly 830 evaluators of 155 medical students. The data extracted included the overall evaluation along with each evaluator’s gender, age, level of training, and department, as well as student data.
The results were startling. The authors found that female students were more likely to receive higher grades than their male counterparts, and that female evaluators awarded lower grades than male evaluators. These findings held after adjusting for department, observation time, age, and level of training. Among male evaluators, grading did not differ significantly by student gender.
These results need to be taken with a grain of salt. After all, the study covered only a single year; the trend might disappear if more years of data were included. At the very least, I think this is an area that warrants further investigation, especially when it involves something as consequential as grades. After all, these evaluations can shape residency prospects!