
Informed Health Choices Primary School Resources
A textbook and a teachers’ guide for 10 to 12-year-olds. The textbook includes a comic, exercises and classroom activities.
| 0 Comments | Evaluated
Confidence Intervals – CASP
The p-value gives no direct indication of how large or important the estimated effect size is. So, confidence intervals are often preferred.
| 0 Comments | Evaluated
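As a rough illustration of the point above, here is a minimal Python sketch (all numbers invented, not taken from the CASP module) that computes a 95% confidence interval for a risk difference using a simple normal approximation; unlike a bare p-value, the interval conveys both the size of the estimated effect and its precision.

```python
# Illustrative only: a 95% confidence interval for a risk difference,
# using a simple normal (Wald) approximation. Numbers are made up.
from math import sqrt

events_treat, n_treat = 30, 200      # hypothetical treatment group
events_ctrl, n_ctrl = 45, 200        # hypothetical control group

p_treat = events_treat / n_treat
p_ctrl = events_ctrl / n_ctrl
risk_diff = p_treat - p_ctrl

# Standard error of the risk difference (normal approximation)
se = sqrt(p_treat * (1 - p_treat) / n_treat + p_ctrl * (1 - p_ctrl) / n_ctrl)
ci_low, ci_high = risk_diff - 1.96 * se, risk_diff + 1.96 * se

print(f"Risk difference: {risk_diff:.3f} "
      f"(95% CI {ci_low:.3f} to {ci_high:.3f})")
# Unlike a bare p-value, the interval shows how large the effect might
# plausibly be and how precisely it has been estimated.
```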
Sunn Skepsis
This portal is intended to give you, as a patient, advice on quality criteria for health information and access to research-based information.
| 0 Comments
Tricks to help you get the result you want from your study (S4BE)
Inspired by a chapter in Ben Goldacre’s ‘Bad Science’, medical student Sam Marks shows you how to fiddle research results.
| 0 Comments
Reporting the findings: Absolute vs relative risk
Absolute differences between the effects of two treatments matter more to most people than relative differences.
| 0 Comments
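A brief worked example, with made-up numbers, of why this distinction matters: the same trial result can be reported as a dramatic-sounding relative difference or a modest absolute one.

```python
# Made-up numbers to illustrate absolute vs relative differences.
risk_control = 0.02      # 2 in 100 have the outcome without treatment
risk_treated = 0.01      # 1 in 100 have the outcome with treatment

absolute_risk_reduction = risk_control - risk_treated             # 0.01 (1 in 100)
relative_risk_reduction = absolute_risk_reduction / risk_control  # 0.5 (50%)
number_needed_to_treat = 1 / absolute_risk_reduction              # 100 people

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")
# A headline '50% reduction' here means 1 fewer event per 100 people treated.
```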
Fish oil in the Observer: the return of a $2bn friend
Ben Goldacre draws attention to people’s wish to believe that a pill can be the solution to a complicated problem.
| 0 Comments
Dodgy academic PR
Ben Goldacre: 58% of all press releases by academic institutions lacked relevant cautions and caveats about the methods and results reported
| 0 Comments
All bow before the mighty power of the nocebo effect
Ben Goldacre discusses nocebo effects, through which unpleasant symptoms are induced by negative expectations, despite no physical cause.
| 0 Comments
How do you regulate Wu?
Ben Goldacre finds that students of Chinese medicine are taught (on a science degree) that the spleen is “the root of post-heaven essence”.
| 0 Comments
Science is about embracing your knockers
Ben Goldacre: “I don’t trust claims without evidence, especially not unlikely ones about a magic cream that makes your breasts expand.”
| 0 Comments
NMT are suing Dr Wilmshurst. So how trustworthy is this company? Let’s look at their website…
Ben Goldacre celebrates Peter Wilmshurst, the doctor who blew the whistle on research misconduct in a study to which he was a contributor.
| 0 Comments
Over there! An 8 mile high distraction made of posh chocolate!
Ben Goldacre illustrates strategies used by vested interests to discredit research with ‘inconvenient’ results.
| 0 Comments
Brain imaging studies report more positive findings than their numbers can support. This is fishy.
Ben Goldacre explores how brain imaging studies came to report roughly twice as many positive findings as their data could realistically support.
| 0 Comments
What if academics were as dumb as quacks with statistics?
Ben Goldacre introduces a statistical error that appears in about half of all the published papers in academic neuroscience research.
| 0 Comments
The strange case of the magnetic wine
Ben Goldacre shows how claims for the wine-maturing effects of magnets could be assessed with 50 people in an evening.
| 0 Comments
Screen test
Ben Goldacre notes that even when people realize that screening programmes have downsides, they don’t regret being screened.
| 0 Comments
Sampling error, the unspoken issue behind small number changes in the news
Ben Goldacre stresses the importance of taking account of “sampling variability” and confidence intervals.
| 0 Comments
The certainty of chance
Ben Goldacre reminds readers how associations may simply reflect the play of chance, and describes Deming’s illustration of this.
| 0 Comments
Publish or be damned
Ben Goldacre points out the indefensible practice of announcing conclusions from research studies which haven’t been published.
| 0 Comments
How myths are made
Ben Goldacre draws attention to Steven Greenberg’s forensically based illustration of citation biases.
| 0 Comments
Foreign substances in your precious bodily fluids
Ben Goldacre points out that there is no evidence giving strong support either to water fluoridationists or to anti-fluoridationists.
| 0 Comments
Is it okay to ignore results from people you don’t trust?
Ben Goldacre: why it’s important to consider vested interests when judging research, but not to dismiss research by people you don’t like.
| 0 Comments
Cherry picking is bad. At least warn us when you do it.
Ben Goldacre illustrates how biased ‘cherry picking’ from the relevant evidence can result in unreliable conclusions.
| 0 Comments
Why won’t Professor Susan Greenfield publish this theory in a scientific journal?
Ben Goldacre challenges a senior Oxford professor to publish the evidence supporting her claim that computer games cause dementia in children.
| 0 Comments
Tipsheet for reporting on drugs, devices and medical technologies
Questions that will be familiar to reporters covering health and medicine.
| 0 Comments
Tips for understanding Intention-to-Treat analysis
Ignoring non-compliance with assigned treatments leads to biased estimates of treatment effects. ITT analysis reduces these biases.
| 0 Comments
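For readers unfamiliar with the idea, here is a minimal hypothetical sketch contrasting an intention-to-treat analysis (analysing participants by the group they were randomised to) with an ‘as-treated’ analysis; the data are invented purely to show the mechanics.

```python
# Hypothetical sketch of intention-to-treat (ITT) vs 'as-treated' analysis.
# Each record: (assigned_group, group_actually_received, had_outcome)
participants = [
    ("treatment", "treatment", False),
    ("treatment", "control",   True),   # did not comply with assignment
    ("treatment", "treatment", False),
    ("control",   "control",   True),
    ("control",   "control",   False),
    ("control",   "treatment", False),  # crossed over to treatment
]

def outcome_rate(records, group, key):
    """Proportion with the outcome, grouping by assignment ('itt') or receipt."""
    selected = [had for assigned, received, had in records
                if (assigned if key == "itt" else received) == group]
    return sum(selected) / len(selected)

# ITT: analyse everyone in the group they were randomised to,
# preserving the comparability created by randomisation.
print("ITT:       ", outcome_rate(participants, "treatment", "itt"),
      outcome_rate(participants, "control", "itt"))
# As-treated: analyse by treatment actually received, which can
# reintroduce the very biases randomisation was meant to remove.
print("As-treated:", outcome_rate(participants, "treatment", "received"),
      outcome_rate(participants, "control", "received"))
```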
Tips for understanding Absolute vs. Relative Risk
Absolute differences between the effects of two treatments matter more to most people than relative differences.
| 0 Comments
Applying Systematic Reviews
How useful are the results of trials in a systematic review when it comes to weighing up treatment choices for particular patients?
| 0 Comments
Systematic Reviews and Meta-analysis: Information Overload
None of us can keep up with the sheer volume of material published in medical journals each week.
| 0 Comments
Combining the Results from Clinical Trials
Chris Cates notes that emphasizing the results of patients in particular sub-groups in a trial can be misleading.
| 0 Comments
Tips for understanding Non-inferiority Trials
A non-inferiority trial aims to show that a new intervention is ‘not unacceptably worse’ than the comparison intervention.
| 0 Comments
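As a rough sketch of that logic, with invented numbers: non-inferiority is usually judged by checking whether the whole confidence interval for the difference stays on the acceptable side of a pre-specified margin.

```python
# Illustrative non-inferiority check with invented numbers.
# The new treatment is declared 'not unacceptably worse' only if the whole
# confidence interval for the difference lies above a pre-specified margin.
noninferiority_margin = -0.05   # we tolerate up to 5 percentage points worse

# Hypothetical estimate: new minus standard treatment success rates
difference = -0.01
ci_lower, ci_upper = -0.04, 0.02   # hypothetical 95% confidence interval

if ci_lower > noninferiority_margin:
    print(f"Estimated difference {difference:+.2f} "
          f"(95% CI {ci_lower:+.2f} to {ci_upper:+.2f}): non-inferior, because "
          f"even the worst plausible value stays above the margin "
          f"({noninferiority_margin:+.2f}).")
else:
    print("Non-inferiority not shown: the difference could plausibly exceed the margin.")
```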
Cyagen is paying for citations
Biotechnology company Cyagen offers researchers and other writers $100 or more for citing their products in publications.
| 0 Comments
No Power, No Evidence!
This blog explains that studies need sufficient statistical power to detect a difference between groups being compared.
| 0 Comments
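As an illustration of what ‘sufficient statistical power’ involves, here is a minimal sketch using the standard normal-approximation sample-size formula for comparing two proportions; the event rates, alpha and power are assumptions chosen only for the example.

```python
# Illustrative sample-size calculation for comparing two proportions,
# using the standard normal-approximation formula (two-sided alpha = 0.05,
# 80% power). All numbers are assumptions for illustration.
from math import sqrt, ceil
from statistics import NormalDist

p1, p2 = 0.30, 0.20            # hypothesised event rates in the two groups
alpha, power = 0.05, 0.80

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
z_beta = NormalDist().inv_cdf(power)            # ~0.84
p_bar = (p1 + p2) / 2

n_per_group = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
               / (p1 - p2) ** 2)

print(f"About {ceil(n_per_group)} participants per group are needed.")
# An underpowered study can easily miss a real difference of this size,
# which is why 'no significant difference' is not the same as 'no difference'.
```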
Beginner’s guide to interpreting odds ratios, confidence intervals and p values
A tutorial on interpreting odds ratios, confidence intervals and p-values, with questions to test the reader’s knowledge of each concept.
| 0 Comments
Sample Size matters even more than you think
This blog explains why adequate sample sizes are important, and discusses research showing that sample size can affect the estimated effect size.
| 0 Comments
What is it with Odds and Risk?
This blog explains odds ratios and relative risks, and provides the formulae for calculating both measures.
| 0 Comments
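For orientation, here is a minimal sketch of the standard formulae applied to a hypothetical 2×2 table; the counts are invented and this is an illustration, not the blog’s own example.

```python
# Relative risk and odds ratio from a hypothetical 2x2 table.
#                outcome   no outcome
# exposed            a          b
# unexposed          c          d
a, b = 20, 80     # exposed group (invented numbers)
c, d = 10, 90     # unexposed group

risk_exposed = a / (a + b)                      # 0.20
risk_unexposed = c / (c + d)                    # 0.10
relative_risk = risk_exposed / risk_unexposed   # 2.0

odds_exposed = a / b                            # 0.25
odds_unexposed = c / d                          # ~0.11
odds_ratio = odds_exposed / odds_unexposed      # 2.25

print(f"Relative risk: {relative_risk:.2f}")
print(f"Odds ratio:    {odds_ratio:.2f}")
# When the outcome is common, the odds ratio exaggerates the relative risk;
# the two converge only when the outcome is rare.
```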
Preclinical animal studies: bad experiments cost lives
This blog notes that few therapies that treat disease in animals successfully translate into effective treatments for humans.
| 0 Comments
Surrogate Endpoints in EBM: What are the benefits and dangers?
This blog explains what surrogate outcomes are, their pros and cons, and why you should be cautious about extrapolating from them to clinical decisions.
| 0 Comments
The Systematic Review
This blog explains what a systematic review is, the steps involved in carrying one out, and how the review should be structured.
| 0 Comments
The Mean: Simply Average?
This blog explains ‘the mean’ as a measure of average; describes how to calculate it; and flags up some caveats.
| 0 Comments
Publication Bias: An Editorial Problem?
A blog challenging the idea that publication bias mainly occurs at editorial level, after research has been submitted for publication.
| 0 Comments
The Bias of Language
Publication of research findings in a particular language may be prompted by the nature and direction of the results.
| 0 Comments
Defining Bias
This blog explains what is meant by ‘bias’ in research, focusing particularly on attrition bias and detection bias.
| 0 Comments
Balancing Benefits and Harms
A blog explaining what is meant by ‘benefits’ and ‘harms’ in the context of healthcare interventions, and the importance of balancing them.
| 0 Comments
Data Analysis Methods
A discussion of two approaches to data analysis in trials - ‘As Treated’ and ‘Intention-to-Treat’ - and some of the pros and cons of each.
| 0 Comments
Defining Risk
This blog defines ‘risk’ in relation to health, and discusses some of the difficulties in applying estimates of risk to a given individual.
| 0 Comments
Traditional Reviews vs. Systematic Reviews
This blog outlines 11 differences between systematic and traditional reviews, and why systematic reviews are preferable.
| 0 Comments
P Value in Plain English
Using simple terms and examples, this blog explains what p-values mean in the context of testing hypotheses in research.
| 0 Comments
Cancer Screening Debate
This blog discusses problems that can be associated with cancer screening, including over-diagnosis and thus (unnecessary) over-treatment.
| 0 Comments
Surrogate endpoints: pitfalls of easier questions
A blog explaining what surrogate endpoints are and why they should be interpreted cautiously.
| 0 Comments
Misconceptions about screening
Screening should not be for everyone or all diseases. It should only be offered when it is likely to do more good than harm.
| 0 Comments
Making sense of randomized trials
A description of how clinical trials are constructed and analysed to ensure they provide fair comparisons of treatments.
| 0 Comments
Clinical Significance – CASP
To understand the results of a trial, it is important to understand the question it was asking.
| 0 Comments
Statistical Significance – CASP
In a well-conducted randomized trial, the groups being compared should differ from each other only by chance and by the treatment received.
| 0 Comments
P Values – CASP
Statistical significance is usually assessed by appeal to a p-value, a probability that can take any value between 0 (impossible) and 1 (certain).
| 0 Comments
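As a rough illustration of what that probability means, the sketch below simulates a simple null hypothesis (a fair 50/50 process) and estimates how often a result at least as extreme as an invented observation would arise by chance alone; all values are assumptions for the example.

```python
# Illustrative simulation of what a p-value represents: the probability of
# seeing a result at least as extreme as the one observed, assuming there is
# really no difference (the null hypothesis). Numbers are invented.
import random

random.seed(1)
observed_heads = 16      # e.g. 16 'successes' out of 20, observed in a study
n_trials, n_sims = 20, 100_000

# Simulate the null hypothesis: a fair 50/50 process.
extreme = sum(
    1 for _ in range(n_sims)
    if sum(random.random() < 0.5 for _ in range(n_trials)) >= observed_heads
)
one_sided_p = extreme / n_sims
print(f"Approximate one-sided p-value: {one_sided_p:.4f}")
# A small value says: results this extreme would be rare if chance alone
# were at work. It does not measure how large or important the effect is.
```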
Making sense of results – CASP
This module introduces the key concepts required to make sense of statistical information presented in research papers.
| 0 Comments
Screening – CASP
This module on screening has been designed to help people evaluate screening programmes.
| 0 Comments
Randomised Controlled Trials – CASP
This module looks at the critical appraisal of randomised trials.
| 0 Comments
Common Sources of Bias
Bias (the conscious or unconscious influencing of a study and its results) can occur in different ways and renders studies less dependable.
| 0 Comments
Introduction to JLL Explanatory Essays
Professionals sometimes harm patients by using inadequately evaluated treatments. Research addressing uncertainties can reduce this harm.
| 0 Comments
Surrogate markers may not tell the whole story
A webpage explaining the limitations of using surrogate outcome markers in clinical research.
| 0 Comments
Testing Treatments
Testing Treatments is a book to help the public understand why fair tests of treatments are needed, what they are, and how to use them.
| 0 Comments
Mixed Messages about Statistical Significance
A webpage explaining the difference between statistical and practical significance.
| 0 Comments
Science fact or fiction? Making sense of cancer stories
A Cancer Research UK blog, explaining how to assess the quality of health claims about cancer.
| 0 Comments
Double blind studies
A webpage discussing the importance of blinding trial participants and researchers to intervention allocation.
| 0 Comments
CEBM – Study Designs
A short article explaining the relative strengths and weaknesses of different types of study design for assessing treatment effects.
| 0 Comments
DISCERN online
A questionnaire providing a valid and reliable way of assessing the quality of written information on treatment choices.
| 0 Comments
7 words (and more) you shouldn’t use in medical news
A webpage explaining that dramatic effects of medical treatments are very rare.
| 0 Comments
Observational Studies – does the language fit the evidence?
A webpage explaining observational studies and their advantages and disadvantages.
| 0 Comments
CASP: making sense of evidence
The Critical Appraisal Skills Programme (CASP) website with resources for teaching critical appraisal.
| 0 Comments
Relative or absolute measures of effects
Dr Chris Cates' article explaining absolute and relative measures of treatment effects.
| 0 Comments
Evidence from Randomised Trials and Systematic Reviews
Dr Chris Cates' article discussing control of bias in randomised trials and explaining systematic reviews.
| 0 Comments
The perils and pitfalls of subgroup analysis
Dr Chris Cates' article demonstrating why subgroup analysis can be untrustworthy.
| 0 Comments
Reporting results of studies
Dr Chris Cates' article discussing how to report study results, with emphasis on P-values and confidence intervals.
| 0 Comments
AllTrials: All Trials Registered | All Results Reported
AllTrials aims to correct the situation in which studies remain unpublished or are published but with selective reporting of outcomes.
| 0 Comments
Association is not the same as causation. Let’s say that again: association is not the same as causation!
This article explains how to tell when correlation or association has been confused with causation.
| 0 Comments
Shared Decision-Making
This resource from the Health Foundation shows how shared decision-making can be made to work in a typical consultation.
| 0 Comments
Connecting researchers with people who want to contribute to research
People in Research connects researchers who want to involve members of the public with members of the public who want to get involved.
| 0 Comments
Why are fair tests of treatments needed?
In this sub-section: Nature, the healer (this page); The beneficial effects of optimism and wishful thinking; The need to go […]
| 2 Comments