There’s a new study out on mammography with important implications for breast cancer screening. The main result is that when radiologists review more mammograms per year, the rate of false positives declines.
The stated purpose of the research*, published in the journal Radiology, was to see how radiologists’ interpretive volume — essentially the number of mammograms read per year — affects their performance in breast cancer screening. The investigators collected data from six registries participating in the NCI’s Breast Cancer Surveillance Consortium, involving 120 radiologists who interpreted 783,965 screening mammograms from 2002 to 2006. So it was a big study, at least in terms of the number of images and outcomes assessed.
First, and before reaching any conclusions: the variation in seasoned radiologists’ everyday experience reading mammograms is striking. From the paper:
…We studied 120 radiologists with a median age of 54 years (range, 37–74 years); most worked full time (75%), had 20 or more years of experience (53%), and had no fellowship training in breast imaging (92%). Time spent in breast imaging varied, with 26% of radiologists working less than 20% and 33% working 80%–100% of their time in breast imaging. Most (61%) interpreted 1000–2999 mammograms annually, with 9% interpreting 5000 or more mammograms.
So they’re looking at a diverse bunch of radiologists reading mammograms, as young as 37 and as old as 74, most with no extra training in the subspecialty. The fraction of work effort spent on breast imaging (presumably mammography, sonograms and MRIs) varied widely: a quarter of the group (26 percent) spent less than a fifth of their time on it, while a third (33 percent) spent almost all of their time on breast imaging studies.
The investigators summarize their findings in the abstract:
The mean false-positive rate was 9.1% (95% CI: 8.1%, 10.1%), with rates significantly higher for radiologists who had the lowest total (P = .008) and screening (P = .015) volumes. Radiologists with low diagnostic volume (P = .004 and P = .008) and a greater screening focus (P = .003 and P = .002) had significantly lower false-positive and cancer detection rates, respectively. Median invasive tumor size and proportion of cancers detected at early stages did not vary by volume.
This means that radiologists who review more mammograms are better at reading them correctly. The main difference is that they are less likely to call a false positive. Their work is otherwise comparable, mainly in terms of cancers identified.**
This matters because the costs of false positives (emotional, which I have argued shouldn’t matter so much; physical, including surgery, complications of surgery and scars; and financial, the costs of biopsies and surgery) are said to be the main problem with breast cancer screening by mammography. If we can reduce the false-positive rate, breast cancer screening becomes more efficient and safer.
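To get a rough sense of the scale involved, one can apply the study’s mean false-positive rate to its total screen count. This is a back-of-the-envelope illustration of my own, not a figure reported in the paper:

```python
# Back-of-the-envelope illustration (not a figure from the paper):
# apply the study's mean false-positive rate to its screen count.

mean_fp_rate = 0.091       # 9.1% mean false-positive rate (95% CI: 8.1%, 10.1%)
total_screens = 783_965    # screening mammograms in the study, 2002-2006

estimated_false_positives = mean_fp_rate * total_screens
print(f"Estimated false positives: {estimated_false_positives:,.0f}")
```

That works out to roughly 71,000 false-positive readings over the study period, which conveys why even a modest reduction in the rate would spare many women unnecessary callbacks and biopsies.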
Time provides the only major press coverage I found of this study, and suggests the findings may be counterintuitive. I guess the notion is that radiologists might tire of reading so many films, or that a higher volume of work is inherently detrimental.
But I wasn’t at all surprised, nor do I find the results counter-intuitive: The more time a medical specialist spends doing the same sort of work — say examining blood cells under the microscope, as I used to do, routinely — the more likely that doctor will know the difference between a benign variant and a likely sign of malignancy.
Experience is very valuable in medicine, and with so much emphasis now on primary care, I’m afraid we’re forgetting about the value of expertise.
Finally, the authors point to the potential problem of access to specialized radiologists — an argument against raising the minimum number of mammograms a radiologist must read per year to be deemed qualified by the FDA under MQSA. The point is that in some rural areas, women wouldn’t have access to mammography if the volume requirements became more stringent. But I don’t see this accessibility problem as a valid issue. If the images were all digital, the doctor’s location shouldn’t matter at all.
*The work, put forth by the Group Health Research Institute and involving a broad range of investigators — biostatisticians, public health specialists and radiologists from institutions across the U.S. — received significant funding from the ACS, the Longaberger Company’s Horizon of Hope Campaign, the Breast Cancer Stamp Fund, the Agency for Healthcare Research and Quality (AHRQ) and the NCI.
**I recommend a read of the full paper and in particular the discussion section, if you can access it through a library or elsewhere. It’s fairly long, and includes some nuanced findings I could not fully cover here.
*This blog post was originally published at Medical Lessons*