On probability, statistics and journalism

This should pose a good teaser for any working reporter or news editor who deals with percentages and probabilities on a day-to-day basis.

If a woman is given a positive screening result after a mammogram, which sounds like bad news, what is the probability that she does not have cancer?

Answer: 91 percent.

Why is this? Cambridge University professor David Spiegelhalter explains in the September UK edition of Wired (emphasis is mine):

Mammography correctly classifies around 90 percent of women who go for breast-cancer screening. So when a middle-aged woman is told she has a positive test result, what’s the probability she doesn’t have cancer? The answer, which is surprising to most people, is around 91 per cent. The crucial missing piece of information is the size of her background risk.

So suppose she is from a population in which around one in 100 have breast cancer. Then, out of 100 such women tested, one would have breast cancer and will most likely test positive. But of the 99 who do not have breast cancer, we would still expect around ten to test positive — as the test is only 90 percent accurate. That makes 11 positive tests, only one of which involves cancer, which gives a 10/11 = 91 percent probability that someone who tests positive does not have breast cancer.
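For readers who want to check the arithmetic, here is a minimal sketch in Python of the counting argument above. The 1-in-100 background risk and 90 percent accuracy are the figures Spiegelhalter uses; the function name and the rounding are my own illustration, not his.

def prob_no_cancer_given_positive(population=100, with_cancer=1, accuracy=0.90):
    # Mirror the quote: the one woman with cancer is assumed to test positive,
    # and roughly 10 percent of the 99 without cancer also test positive.
    without_cancer = population - with_cancer
    true_positives = with_cancer
    false_positives = round(without_cancer * (1 - accuracy))  # about 10 of the 99
    return false_positives / (true_positives + false_positives)

print(prob_no_cancer_given_positive())  # 10 / 11, roughly 0.91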

Spiegelhalter writes that this is hard to grasp because probability itself is hard to grasp: it is unintuitive, and it does not follow the rules of logic we think govern our lives and the outcomes of our decisions.

But he also correctly identifies a flaw in news reporting: the numerator – the number of events – is enthusiastically reported, without any mention of the number of times the event could have happened, the denominator.

So the health scare stories (mentioning no names) that are perpetually reported – invariably based on unpublished, unreviewed studies and promoted by PR officers – lack this context and are thus utterly misleading. Ben Goldacre has been making this point for years.

Health scare stories may be right to report that the relative risk of, for example, getting cancer from drinking (or not drinking) red wine, eating peanuts or reading the Daily Mail has increased or decreased. But the absolute risk may be statistically unchanged when a person is considered as part of a wider population, not just the 1,000 or so who took part in the study. A toy calculation after this paragraph makes the distinction concrete.
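The figures below are invented purely for illustration: a relative risk that doubles can correspond to a negligible change in absolute risk.

# Hypothetical numbers, chosen only to illustrate relative versus absolute risk.
baseline_risk = 1 / 10_000   # risk in the unexposed group
exposed_risk = 2 / 10_000    # risk in the exposed group

relative_risk = exposed_risk / baseline_risk      # 2.0, i.e. "doubles the risk"
absolute_increase = exposed_risk - baseline_risk  # one extra case per 10,000 people

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute increase: {absolute_increase:.4%}")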

News naturally focuses on the unlikely and the shocking – it would be boring otherwise. But that doesn’t necessarily mean it has to be misleading.
