Heavy mobile phone use may be linked to an increased risk of cancer of the salivary gland, a study suggests.

That sounds frightening but, buried deep in the story, the BBC writes that a bigger study contradicts it:
Despite these latest findings, the largest and longest-running investigation ever to be carried out into mobile phone usage found no increased risk of any sort of cancer. It followed 420,000 people in Denmark, some of whom had been using a mobile phone for as long as ten years. There was in fact a lower incidence of cancer than expected in a group of that size, suggesting mobile phones had no impact on the development of tumours.

One important point about all such studies, which scientists know but the news media doesn't explain, is that the results of statistical studies are statistical, that is, somewhat random. The usual scientific standard is to report results at the 95% confidence level. Now suppose, for the moment, that there is absolutely no connection between cell phones and cancer, and perform tests for correlations between 20 different types of cancer and cell phone usage. At the 95% confidence level, odds are that one of those twenty test results will be wrong. (This is because 1 out of 20 is 5%, and 5% is the expected error rate on tests done to 95% confidence.) To be clear, that one wrong result does not mean that the scientist did anything wrong: as scientists well know, it is simply the nature of statistical studies. What happens next, of course, is that the news media will write a headline based on that one result.
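The 1-in-20 arithmetic is easy to check with a short simulation. The sketch below is a hypothetical illustration, not data from either study: under the assumption of no real connection, each test's p-value is uniformly distributed, so each of 20 tests has a 5% chance of producing a spurious "link":

```python
import random

random.seed(42)

N_CANCERS = 20      # hypothetical number of cancer types tested
ALPHA = 0.05        # 95% confidence level
N_ROUNDS = 10_000   # number of simulated 20-test studies

# Under the null hypothesis (no real link), each test's p-value is
# uniform on [0, 1], so the chance of a spurious positive is ALPHA.
false_positives = [
    sum(1 for _ in range(N_CANCERS) if random.random() < ALPHA)
    for _ in range(N_ROUNDS)
]

avg = sum(false_positives) / N_ROUNDS
at_least_one = sum(1 for k in false_positives if k >= 1) / N_ROUNDS

print(f"average false positives per study: {avg:.2f}")       # ≈ 1.0
print(f"chance of at least one 'link':     {at_least_one:.2f}")  # ≈ 0.64
```

So even when there is nothing to find, a typical 20-test study produces about one "finding", and roughly two studies in three produce at least one.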
Next, that scientific group or another performs a further 20 tests for correlation. Still assuming that there really is no connection, only (about) one of those tests will come up positive, and, by random chance, it will likely point to a different cancer. That new (wrong) result will get another headline in the news media.
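A hypothetical continuation of the simulation shows why the flagged cancer tends to move around: under the no-connection assumption, which of the 20 cancers comes up positive in any given study is pure chance.

```python
import random

random.seed(1)

N_CANCERS = 20  # hypothetical number of cancer types tested per study
ALPHA = 0.05    # 95% confidence level

def spurious_links():
    # Indices of the cancer types "flagged" by pure chance in one
    # 20-test study, assuming no real connection exists for any of them.
    return [i for i in range(N_CANCERS) if random.random() < ALPHA]

# Each simulated study tends to flag a different cancer (or none).
for study in range(1, 6):
    print(f"study {study} flags cancer types: {spurious_links()}")
```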
So how do you tell the difference between valid and spurious results in such studies? It is simple: if several studies show a link to the same cancer, the result is likely valid. If successive studies show links to different cancers, then each of those results is likely a mere statistical fluke. Since, so far, successive studies give different results on cell phones and cancer, as in the two studies mentioned by the BBC above, the reasonable conclusion is that we are looking at statistical flukes.
The second key issue with these studies is that they find only correlations, not causality. The Israeli study, for example, found that the increased cancer risk was concentrated among heavy cell phone users who live in rural areas. Even if the statistical result is real, it could be that the salivary gland tumors have nothing to do with cell phone usage and are instead caused by exposure to, say, cow manure or other chemicals common in rural areas, where, perhaps, landlines are scarce and cell phone use is therefore more common. Issues like that could be sorted out by future statistical studies. (It is likely that, as good scientists, Dr. Sadetzki and her colleagues are already working on such issues.)
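The confounding scenario can also be sketched. The numbers below are entirely made up for illustration: rural residence drives both heavy phone use and the true tumor risk, while phone use itself has no causal effect, yet a naive comparison still shows more tumors among heavy users:

```python
import random

random.seed(0)

N = 100_000  # simulated population size (hypothetical)

# Made-up model: rural residence raises both the odds of heavy cell
# phone use (no landline) and the real exposure that causes the tumor.
# Phone use itself has NO causal effect on tumors in this model.
rural = [random.random() < 0.3 for _ in range(N)]
heavy_phone = [random.random() < (0.6 if r else 0.2) for r in rural]
tumor = [random.random() < (0.02 if r else 0.005) for r in rural]

def rate(flags, cond):
    # Tumor rate within the subgroup selected by `cond`.
    sel = [f for f, c in zip(flags, cond) if c]
    return sum(sel) / len(sel)

print(f"tumor rate, heavy users: {rate(tumor, heavy_phone):.4f}")
print(f"tumor rate, light users: {rate(tumor, [not h for h in heavy_phone]):.4f}")

# Controlling for the confounder: among rural residents only, phone use
# no longer predicts tumors.
rural_heavy = [r and h for r, h in zip(rural, heavy_phone)]
rural_light = [r and not h for r, h in zip(rural, heavy_phone)]
print(f"rural heavy vs rural light: "
      f"{rate(tumor, rural_heavy):.4f} vs {rate(tumor, rural_light):.4f}")
```

The raw comparison shows heavy users with roughly double the tumor rate, yet stratifying by the rural variable makes the association vanish, which is exactly the kind of check a follow-up study can perform.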
So, for the time being, there is no reason to think that tinfoil hats are necessary.