Stories that link diseases to their possible causes are popular, and often generate humour, bemusement, and skepticism. Readers assume that today’s health hazards will be tomorrow’s health saviours. Rod Liddle’s headline in the Sunday Times is an example: “Toasties get you laid, fat prevents dementia and I’m a sex god.” Liddle starts with some fun statistics showing that those who ate cheese toasties had more enjoyable sex than those who did not. He then moves to a serious study showing that being overweight was associated with a lower risk of dementia. His contention, that it is hard to take such research seriously when contradictory results are so common, rings true. One of the latest controversies is that saturated fats, and particularly dairy fats, may have been undeservedly targeted in nutrition guidelines.
Other controversies in recent years include the role of hormone replacement therapy in women for the avoidance of cancers and heart disease, testosterone use in men, statins’ side effects, whether alcohol is good or bad for the heart, and the use of vitamin supplements, most recently around vitamin D. Alongside these, however, we should recall many similar settled issues that went through equally turbulent times, such as the effects of smoking including passive exposure, the laying of infants on their backs to prevent cot death, and the use of seat belts to reduce death and injury from road traffic accidents.
So why do medical sciences get it wrong so often? And what is the distinguishing feature of ideas that turn out to be right? Fundamentally, we get it wrong for three reasons. First, the causes of diseases are complex and so are the effects of factors that cause disease. Second, the scientific process is based on getting it wrong most of the time. Third, often our concepts and methods are incomplete and too blunt for the challenge.
The causes of diseases are usually mysterious, and mysteries generate many explanations. The cause of cholera epidemics was a mystery in the mid-19th century and the founder of epidemiology, Dr John Snow, died long before his evidence that the causal route involved drinking contaminated water was accepted. It seems incredible that Snow’s evidence was rejected in favour of the discredited miasma theory. There was, however, a key concept missing at the time: the germ theory of disease. Without a proper understanding of microbes, people found it hard to understand how apparently clean water could cause diseases.
A triumph of 20th century medical science was proving that the epidemic of lung cancer was primarily attributable to smoking tobacco. Now that we all agree, the controversies are forgotten. When large-scale investigations started in the 1940s and 1950s the cast of causal suspects was large, and traffic-related pollution, including tar on roads, was amongst the leading explanations. There were decades of intense controversy before the evidence, mostly epidemiological, was accepted. The concepts underpinning the causal interpretation of data showing statistical associations between the risk factor (smoking) and outcome (lung cancer) were rudimentary and had to be articulated and agreed before we could resolve the controversies.
Science requires imagination and speculation in articulating a hypothesis, which is a succinct potential explanation of a phenomenon. The most recent hypothesis that my colleagues and I have published postulates that cooking at high heat (>150 °C) might be the reason why South Asians are highly susceptible to coronary heart disease (CHD), including heart attacks, particularly compared with Chinese people. I will have no regrets if the hypothesis is wrong, especially as I am keen on Indian cuisine! Being wrong is part of the scientific process.
Often, our concepts and methods are insufficient for tackling unsolved problems; if they were sufficient, the problems would already have been solved. Epidemiology usually kick-starts understanding. Epidemiology studies the differing patterns of diseases in subgroups of populations and explores the reasons for them. The evidence for differences is measured using incidence rates, often expressed as the ratio of the incidence in one population to that in another (the relative risk, or rate ratio). For example, the incidence rate of CHD might be 300 per 100,000 per year in men and 150 per 100,000 per year in women, meaning that men have twice the risk of women (relative risk = 2). This needs explanation, and the starting point would be a hypothesis. Making such measurements in populations is hard, and numerous errors arise from faulty underlying data, e.g. an underestimated number of cases or the wrong population size. Even if the data were accurate, the explanation could be far from obvious.
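The relative-risk arithmetic above can be sketched in a few lines of code (the numbers are those from the text; the function names are my own, not a standard library's):

```python
def incidence_rate(cases, person_years):
    """Incidence per 100,000 person-years."""
    return cases / person_years * 100_000

def relative_risk(rate_exposed, rate_reference):
    """Ratio of incidence rates between two groups."""
    return rate_exposed / rate_reference

# Figures from the text: 300 and 150 cases per 100,000 per year.
men = incidence_rate(300, 100_000)
women = incidence_rate(150, 100_000)
print(relative_risk(men, women))  # 2.0, i.e. men have twice the risk
```

A relative risk of 1 would mean no difference between the groups; values above or below 1 invite a hypothesis, not a causal conclusion.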
Epidemiologists have evolved a toolkit of study designs and data analysis methods. Of these, only one is agreed to provide truly causal evidence: the perfect experiment, or, as it is called in human research, the perfect trial. However, the ethical opportunity to test causal hypotheses in humans is rare. We can experiment on animals, but we cannot be sure the results extrapolate to humans. Epidemiology, therefore, works mostly with non-experimental methods. This is a great strength of the discipline, but it has its limitations. The biggest problem is confounding, which arises when the populations being compared differ in ways relevant to the causation of the disease, and not only in the factor we are studying. Analysing data from observational studies to achieve causal understanding is seriously demanding. The concepts and techniques for causal interpretation of observational data have evolved quickly over the last 50 years, but they remain insufficient. Currently, human judgement is a major ingredient in causal reasoning. In future, perhaps computers will help more, by objectively evaluating the evidence and setting out causal probabilities.
Meanwhile, scientists and the public alike will have to accept that in judging controversies such as whether eating fat prevents dementia, we will be wrong more often than right. When we are right, the benefits are usually huge. In medical sciences, many wrongs tend to eventually make a right!
Featured image credit: Cheese Toastie by Asnim Asnim. CC BY 2.0 via Unsplash.
Yes, trial and error is inherent in the path to learning and understanding. The layman's skepticism of expert opinion is understandable, and it has value too: it helps maintain humility.