Never trust an expert! Ever wondered why so much health advice is contradictory? It's because two-thirds of medical research is wrong or fraudulent
By David H Freedman
Last updated at 3:42 AM on 6th July 2010
Have you been left confused by expert health advice? Even people like me, with years of experience in science and medical journalism, are left scratching our heads when research is contradicted by other studies or turns out to be wrong.
In early 2008, new guidelines for life-saving emergency heart attack treatment said you should no longer bother with the 'mouth-to-mouth' part of CPR (cardiopulmonary resuscitation). Instead, you should pump the chest non-stop.
Having got my Red Cross certificate some years ago, I wanted to know more - but discovered that while this change was endorsed by the European Resuscitation Council, the Red Cross still trains people to give mouth-to-mouth.
Testing, testing: Medical research can often result in incorrect conclusions
So I asked Paul Schwerdt, a cardiac resuscitation expert who restarts hearts daily. He told me to forget about CPR, because even trained laypeople rarely do it well enough to make a difference.
He said the best thing is an Automated External Defibrillator - a portable, easy-to-use device that is increasingly available in public places.
I found an article that said it can raise the survival rate for people having heart attacks outside hospital from 1 per cent to 80 per cent. But then I read another study saying such devices don't increase survival compared with CPR.
Little wonder that 'expert' health research leaves many of us confused - and that includes medics, too.
John Ioannidis, a doctor specialising in infectious diseases who is also a medical research analyst, has looked at hundreds of studies and discovered that two in every three conclusions published in medical journals are later found to be wrong.
The problem is that those are the sorts of conclusion your doctor reads when deciding if it makes sense to prescribe an antibiotic for your child's ear infection, or if the benefits outweigh the risks in suggesting that middle-aged men take a small daily dose of aspirin.
The two-out-of-three wrongness rate Professor Ioannidis found could be even worse: he examined only the less than one-tenth of 1 per cent of research that makes it into prestigious journals. So, what is going on?
Here are some of the reasons why experts get it so wrong:
RESEARCHERS MAKE UP FINDINGS The research community likes to say that the high-profile cases of fraud we see in the media - such as the South Korean researcher Woo Suk Hwang's fake claims to have cloned human stem cells in 2005 - are rare events.
Another notorious example was that of the cancer researcher William Summerlin, who won praise for achieving skin grafts on genetically incompatible black and white mice.
In fact, he had used a marker pen to blacken patches of fur on white mice. But research fraud appears to be rife.
In an anonymous survey of 3,200 medical researchers in the journal Nature, a third confessed to at least one fraudulent act or 'massaging' research results.
In a similar survey, half the research workers said they knew of studies that involved fraud.
The proportion that are caught is minuscule. What motivates such surprising levels of dishonesty?
The answer is simple: researchers need to keep on publishing impressive findings in scientific journals in order to keep their professional careers alive, and some seem unable to come up with them through honest work.
THEY FIDDLE THE RESULTS Highly respected scientists toss out data all the time. They pretty much have to. It would be hard to justify keeping 'findings' when a key piece of equipment is faulty or if patients in studies are caught not sticking to their drug or diet regimens.
The problem is that it's not always clear where to draw the line between data that is bad and data that the researcher just doesn't like.
Douglas Altman, who directs the Centre for Statistics in Medicine in Oxford, examined more than 100 drug studies, comparing raw data and published results.
He found that in most studies some data had been left out - and more often than not, the discarded data didn't fit the conclusions and might have raised difficult questions.
The ultimate form of data cleansing is throwing away a whole study's worth of information by not submitting it for publication because the results aren't the ones hoped for.
Often, these 'lost' negative results are from studies funded by drug companies - if you are trying to get a medicine onto the market, you don't want to publish research that makes it look bad.
A study two years ago revealed that 23 out of 74 antidepressant trials were not published.
All but one had found the drugs to be more or less ineffective compared with a sugar pill placebo.
In contrast, all 37 positive studies were published.
THEY STUDY THE WRONG PATIENTS
The reason trials may prove untrustworthy is that they study the wrong people. A study may be carefully conducted, yet assess a drug's effects on the wrong people - those who do not represent the patients who would actually take the drug.
Sometimes people in medical studies are particularly health conscious or unusually ill. Then there is the fact that many studies pay people to take part, which results in a high percentage of poor people, and sometimes alcoholics, drug misusers and the homeless. These participants can sway the results.
Studies in the Nineties appeared to prove hormone replacement therapy (HRT) reduced the risk of heart disease by 50 per cent. Then a large study in 2002 seemed to prove HRT increased the heart disease risk by 29 per cent.
Why the huge discrepancy? It turned out the groups had significantly different make-ups: the first had relatively young women, the second older women - and neither was representative, so both produced misleading results.
THEY MOVE THE GOALPOSTS
Sheer chance means that in a medical or psychological study, you will always see improvement in a group of people over time - a slight loss in excess weight, for instance.
That change needn't have anything to do with what is being tested - but by writing up the study as if that change was the planned focus all along, the researcher can claim it was caused by whatever was being tested.
'It's like throwing darts on a wall and then drawing a dartboard around them,' says Douglas Altman.
He has compared study proposals submitted by researchers with the published findings: 'We found the stated focus of research was different in more than half the cases.'
In other words, half the results were flukes that had been turned into alleged scientific fact.
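The 'darts then dartboard' trick works because of simple probability: record enough different outcomes in one study and chance alone will hand you a 'significant' result to write up. A minimal simulation (the numbers here are illustrative, not drawn from any real study) shows how often that happens:

```python
import random

random.seed(42)

TRIALS = 10_000   # simulated studies with no real effect at all
OUTCOMES = 20     # different measurements recorded per study
ALPHA = 0.05      # chance any single null outcome looks 'significant'

# Even when a treatment does nothing, each measured outcome still has
# a 5 per cent chance of crossing the significance threshold by luck.
flukes = sum(
    any(random.random() < ALPHA for _ in range(OUTCOMES))
    for _ in range(TRIALS)
)

print(f"Studies with at least one chance 'finding': {flukes / TRIALS:.0%}")
print(f"Theoretical rate: {1 - (1 - ALPHA) ** OUTCOMES:.0%}")
```

With 20 recorded outcomes, roughly two in three studies of a useless treatment will still contain at least one 'positive' result - which is why pre-registering the focus of a study, as Altman's comparison of proposals and published findings implies, matters so much.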
THEY STUDY THE WRONG MAMMAL
In a notorious incident four years ago at Northwick Park Hospital, Middlesex, an experimental leukaemia drug was given to six volunteers.
They all quickly fell seriously ill. The drug had been safety-tested beforehand and passed with flying colours. But it had been safety-tested on animals, where it had shown no harmful effects, even at doses up to 500 times higher than those given to the volunteers.
Health research has become dependent on animals. Treatment breakthroughs you see in the media frequently turn out to be based on studies of mice. But often the results don't translate to humans.
Three-quarters of drugs fail human trials because of dangerous side-effects or because they simply don't work.
Adapted from Wrong: Why Experts Keep Failing Us And How To Know When Not To Trust Them by David H. Freedman (Little, Brown, £12.99). To order a copy (P&P free), call 0845 155 0720.
Good article, isn't it? Once again I'm being lazy by using others' work - but when that work is well written and researched, and I give credit where it's due, why not?