Testing drugs on animals, both to ensure safety and to assess efficacy, is the essential precursor to testing compounds in humans. But disturbing new evidence calls into question the use of animal tests in the development of treatments for neurological conditions such as stroke.
The CAMARADES collaboration
CAMARADES is an international collaboration involving the School of Hygiene and Tropical Medicine in the UK, the National Stroke Research Institute in Melbourne, Australia, Humboldt University in Berlin, Germany, and the University Medical Centre, Utrecht, in the Netherlands. Its aim is to provide robust data to inform improvements in the design, conduct, reporting and use of animal data in the development of stroke drugs. Further information and background publications are available at www.camarades.info.
Recently, one drug that showed efficacy in animal tests, NXY-059, was tested by its developer, AstraZeneca plc, in a worldwide clinical trial involving over 3,000 patients. Although animal tests had shown an improvement in outcome of over 44 per cent, the trial found no discernible effect in humans.
“So either the animal studies for NXY-059 were misleading, or the clinical trials weren’t sensitive enough, or animal experiments aren’t a good model of human disease,” Malcolm Macleod, Consultant Neurologist at Stirling Royal Infirmary told the British Association meeting at York University, UK, last week.
Macleod is heading up the Collaborative Approach to Meta-analysis and Review of Animal Data in Experimental Stroke (CAMARADES), an international collaboration that is testing each of these potential explanations.
Researchers’ influence
Macleod noted that it’s now well known that, consciously or subconsciously, researchers can influence the results of experiments. Where two groups of test animals are being compared, subjects might be allocated to those groups differently; the groups might be treated differently (for instance given a less severe stroke); and the analysis of outcome might be biased by expectations of what a drug should do.
These problems are overcome if the allocation to experimental group is made randomly, and if that allocation is concealed from the researchers that induce the stroke and assess the outcome.
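In practice, both safeguards can be built into the allocation step itself. The sketch below is an illustrative Python fragment (not CAMARADES code): animals receive opaque codes so that the researchers inducing the stroke and scoring the outcome never see which group an animal belongs to, and the code-to-group key is held back until analysis.

```python
import random

def allocate(subject_ids, groups=("treatment", "control"), seed=None):
    """Randomly assign subjects to groups behind opaque codes.

    Returns (codes, key): `codes` maps each subject to an opaque label,
    which is all the experimenters see (allocation concealment); `key`
    maps labels back to the true group and is revealed only at analysis.
    """
    rng = random.Random(seed)
    # rng.sample guarantees the opaque codes are unique.
    pool = rng.sample(range(10**6), len(subject_ids))
    codes, key = {}, {}
    for sid, n in zip(subject_ids, pool):
        code = f"A{n:06d}"
        codes[sid] = code
        key[code] = rng.choice(groups)  # random, not experimenter-chosen
    return codes, key
```

The essential design point is the separation of the two dictionaries: anyone handling the animals works only with `codes`, while `key` stays with a third party until outcomes have been recorded.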
Macleod and colleagues studied the animal data for NXY-059 to see if there were different results between trials that attempted to remove researcher bias in this way, and those that did not.
“Overall, NXY-059 improved outcome by 44 per cent,” said Macleod. “However, when we looked in more detail, disturbing patterns appeared.”
Studies which did not randomise animals said NXY-059 improved outcome by more than 50 per cent; those which did estimated the effect at only 20 per cent.
In studies which did not conceal treatment allocation from the researcher inducing the stroke, NXY-059 improved outcome by more than 50 per cent. For those where the researcher was blind to treatment allocation the effect was only 25 per cent.
Similarly, studies which did not blind the assessment of outcome indicated NXY-059 improved outcome by almost 50 per cent; those that were blinded estimated the effect at less than 30 per cent.
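Comparisons like these come from pooling studies within each quality stratum and weighting larger studies more heavily. The fragment below shows the idea with invented numbers, not the CAMARADES data: a sample-size-weighted mean improvement is computed separately for studies with and without blinded outcome assessment.

```python
def pooled_effect(studies):
    """Sample-size-weighted mean improvement (per cent) across studies.

    Each study is a (per_cent_improvement, n_animals) pair.
    """
    total_n = sum(n for _, n in studies)
    return sum(effect * n for effect, n in studies) / total_n

# Illustrative, invented figures -- not the published NXY-059 data.
unblinded = [(55.0, 12), (48.0, 10)]
blinded   = [(28.0, 14), (24.0, 16)]

print(round(pooled_effect(unblinded), 1))  # apparent effect, no blinding
print(round(pooled_effect(blinded), 1))    # smaller effect once blinded
```

A full meta-analysis would weight by the inverse variance of each study's estimate rather than raw sample size, but the stratified comparison works the same way.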
Similar findings
Nor are the CAMARADES findings limited to NXY-059. Macleod and his colleagues have reported similar findings for other candidate drugs. “But what is disturbing about the data for NXY-059 is that, for a drug where most of the published work was funded by the drug’s manufacturer, the impact of poor study quality was much more pronounced,” he said.
Macleod has a further concern about the quality of data for NXY-059 and other stroke treatments, in that the number of animals used in individual experiments was generally too small to allow a precise estimate of outcome. “In the long run this means more animals have to be used, as scientists attempt to replicate the imprecise results of other studies,” said Macleod.
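The link between group size and precision can be made concrete with the standard error of a mean, which shrinks only as the square root of the number of animals. The figures below are illustrative assumptions, not taken from the stroke literature:

```python
import math

def ci_halfwidth(sd, n, z=1.96):
    """Approximate 95% confidence-interval half-width for a mean outcome."""
    return z * sd / math.sqrt(n)

# Assuming an outcome-score standard deviation of 20 (illustrative):
print(round(ci_halfwidth(20, 8), 1))   # 8 animals: mean known to ~ +/-14
print(round(ci_halfwidth(20, 50), 1))  # 50 animals: ~ +/-5.5
```

Quadrupling precision requires sixteen times the animals, which is why many small, imprecise studies that each need replicating can end up consuming more animals overall than fewer, adequately sized ones.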
What is especially disturbing about these conclusions is that the animal models of stroke are thought to closely mimic the pathology in humans. This could call into question the validity of animal tests where the disease models are known to be less precise.
Beyond the safety and cost issues raised by the disparity between results from animal tests and human trials, there are implications for animal welfare. Macleod says that, at a “highly conservative” estimate, at least 250,000 animals have been used in stroke trials over the past 20 years.
Based on the CAMARADES findings, Macleod argues there is significant scope for improvements in the design, conduct, analysis and reporting of animal experiments. By minimising bias, such improvements would increase the amount of valid information gained from the animals used.
And a precise and robust overview of existing data, based on systematic review and meta-analysis, would pinpoint the precise areas in which further experiments should focus, ensuring that unnecessary replication did not occur.
The need for such assessments was reflected in a recent report from the UK’s Nuffield Council on Bioethics, The Ethics of Research Involving Animals, which noted the scarcity of useful systematic reviews and meta-reviews that address the question of the scientific validity of animal experiments and tests.
“In principle, it would therefore be desirable to undertake further systematic reviews and meta-analyses to evaluate more fully the predictability and transferability of animal models,” says the report.