Tuesday, June 12, 2012

The effects of "score creep": trends in residency selection criteria

Every other year the National Resident Matching Program (NRMP) publishes a document called Charting Outcomes in the Match, accessible at http://www.nrmp.org/data/index.html. The document analyzes many data points of interest to medical students, medical schools, and residency programs. For example, many senior medical students look at the average Step 1 and Step 2 scores, number of research experiences, and number of programs they need to rank to feel comfortable about matching. These indicators are all reported by individual specialty.

A discussion on SDN sparked an interesting debate and analysis about the effects of score creep on MD/PhDs.

As I have written previously, clinical indicators of performance such as Step 1 and Step 2 scores are the most important factors in obtaining a residency position. Score creep is the effect that many MD/PhDs observe over their 7-9 year programs: residencies seem to become more competitive with each year of the long training period.

To evaluate whether this "score creep" truly exists, I analyzed markers of competition in Charting Outcomes in the Match over the period from the first analysis (of the 2005 residency match) through the most recent publication in 2011. I found dramatic evidence for this effect, as shown below.
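To make the kind of trend calculation described above concrete, here is a minimal sketch of estimating the year-over-year change in a competition marker (for instance, the mean Step 1 score of matched US seniors in one specialty) with an ordinary least-squares slope. The year/score pairs below are hypothetical placeholders for illustration, not figures from Charting Outcomes.

```python
# Sketch of a trend ("score creep") estimate using an ordinary
# least-squares slope, standard library only. The year/score pairs
# are HYPOTHETICAL placeholders, not Charting Outcomes data.

def ols_slope(xs, ys):
    """Return the slope of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Hypothetical mean Step 1 scores for matched US seniors in one specialty,
# one value per Charting Outcomes publication year.
years = [2005, 2007, 2009, 2011]
scores = [218.0, 221.0, 224.0, 227.0]

slope = ols_slope(years, scores)
print(f"Estimated score creep: {slope:.1f} points per year")  # 1.5 here
```

A positive slope across publication years is the "dramatic evidence" pattern described in the text; the same calculation can be repeated for other markers such as number of research experiences or contiguous ranks.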

After discussion with one of several program faculty who post frequently on SDN, we decided to write a manuscript and publish these data to demonstrate two things. First, we wanted to show that residencies are, overall, becoming more competitive over time. This is likely due to expansion of medical schools and class sizes, especially osteopathic, against a relatively unchanged number of residency positions. I also believe that nearly everyone regards Step 1 as the single most important factor in residency selection. Thus, Step 1-specific preparation seems to intensify every year: enrollment in question banks and other formal review courses increases, medical schools allot students more dedicated time off to study for the exam, and curricula are revised to focus more on Step 1 material. I think this creates an artificial distraction from the true goals of medical education, all in service of a competitive benchmark that most would argue bears little relation to a medical student's future potential as a physician.

Second, our editorial position is that score creep is a problem particularly for MD/PhDs. That is, MD/PhDs take the Step 1 exam after second year yet compete with medical students for residency after a four-year delay (the PhD). Thus, an MD/PhD student's score may not be as impressive by the time they graduate. Further, advice given at the time of taking Step 1 may no longer be current at graduation. For example, our MD/PhD program director frequently told us that the Step 1 score was unimportant as long as we passed. As I have written previously, this is completely untrue, though it probably was reasonably true until about ten years ago.

I submitted the following manuscript to three journals and had little luck. The first returned it without review. The second diplomatically declined to publish it, as it was not felt to be relevant to residency programs. The third took six months to send back a review so off-topic that I think they may have sent me the review of someone else's manuscript. Still, I think these data are important and relevant to the pre-medical community, so the manuscript is self-published below.

There are a few benefits to self-publication. First, I can present all the figures in color. Second, I added a supplemental section with additional figures showing all of the data from Charting Outcomes for matched US seniors. See the very bottom for the supplement.