Informed Health Choices Primary School Resources
A textbook and a teachers’ guide for 10 to 12-year-olds. The textbook includes a comic, exercises and classroom activities.
Confidence Intervals – CASP
The p-value gives no direct indication of how large or important the estimated effect size is. So, confidence intervals are often preferred.
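As a quick illustration with invented numbers: a result reported only as ‘p = 0.04’ says nothing about how big the effect is, whereas ‘risk ratio 0.80 (95% confidence interval 0.65 to 0.98)’ shows both the size of the estimated effect and how imprecise that estimate is.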
Sunn Skepsis
This portal is intended to give you, as a patient, advice on quality criteria for health information and access to research-based information.
Tricks to help you get the result you want from your study (S4BE)
Inspired by a chapter in Ben Goldacre’s ‘Bad Science’, medical student Sam Marks shows you how to fiddle research results.
Reporting the findings: Absolute vs relative risk
Absolute differences between the effects of two treatments matter more to most people than relative differences.
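A simple invented example: if a treatment reduces the risk of an event from 2 in 100 to 1 in 100, the relative risk reduction is 50%, but the absolute risk reduction is just 1 percentage point (one fewer event for every 100 people treated).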
Fish oil in the Observer: the return of a $2bn friend
Ben Goldacre draws attention to people’s wish to believe that a pill can be the solution to a complicated problem.
Dodgy academic PR
Ben Goldacre: 58% of the academic press releases studied lacked relevant cautions and caveats about the methods and results reported.
All bow before the mighty power of the nocebo effect
Ben Goldacre discusses nocebo effects, through which unpleasant symptoms are induced by negative expectations, despite no physical cause.
How do you regulate Wu?
Ben Goldacre finds that students of Chinese medicine are taught (on a science degree) that the spleen is “the root of post-heaven essence”.
Science is about embracing your knockers
Ben Goldacre: “I don’t trust claims without evidence, especially not unlikely ones about a magic cream that makes your breasts expand.”
NMT are suing Dr Wilmshurst. So how trustworthy are this company? Let’s look at their website…
Ben Goldacre celebrates Peter Wilmshurst, the doctor who blew the whistle on research misconduct in a study to which he was a contributor.
Over there! An 8 mile high distraction made of posh chocolate!
Ben Goldacre illustrates strategies used by vested interests to discredit research with ‘inconvenient’ results.
Brain imaging studies report more positive findings than their numbers can support. This is fishy.
Ben Goldacre explores how brain imaging studies came to report roughly twice as many positive findings as their data could realistically support.
What if academics were as dumb as quacks with statistics?
Ben Goldacre introduces a statistical error that appears in about half of all the published papers in academic neuroscience research.
The strange case of the magnetic wine
Ben Goldacre shows how claims for the wine-maturing effects of magnets could be assessed with 50 people in an evening.
Screen test
Ben Goldacre notes that even when people realise that screening programmes have downsides, they don’t regret being screened.
Sampling error, the unspoken issue behind small number changes in the news
Ben Goldacre stresses the importance of taking account of “sampling variability” and confidence intervals.
The certainty of chance
Ben Goldacre reminds readers how associations may simply reflect the play of chance, and describes Deming’s illustration of this.
Publish or be damned
Ben Goldacre points out the indefensible practice of announcing conclusions from research studies which haven’t been published.
How myths are made
Ben Goldacre draws attention to Steven Greenberg’s forensically based illustration of citation biases.
Foreign substances in your precious bodily fluids
Ben Goldacre points out that there is no evidence giving strong support either to water fluoridationists or to anti-fluoridationists.
Is it okay to ignore results from people you don’t trust?
Ben Goldacre: why it’s important to consider vested interests when judging research, but not to dismiss research by people you don’t like.
Cherry picking is bad. At least warn us when you do it.
Ben Goldacre illustrates how biased ‘cherry picking’ from the relevant evidence can result in unreliable conclusions.
Why won’t Professor Susan Greenfield publish this theory in a scientific journal?
Ben Goldacre challenges a senior Oxford professor to publish the evidence supporting her claim that computer games cause dementia in children.
Tipsheet for reporting on drugs, devices and medical technologies
Questions that will be familiar to reporters covering health and medicine.
Tips for understanding Intention-to-Treat analysis
Ignoring non-compliance with assigned treatments leads to biased estimates of treatment effects. ITT analysis reduces these biases.
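An invented illustration: if 100 people are randomised to surgery and 100 to drug treatment, and 20 of those allocated to surgery are too unwell to be operated on, leaving those 20 out of the surgery group’s results breaks the balance created by randomisation; an intention-to-treat analysis keeps everyone in the group to which they were originally allocated, whether or not they received the allocated treatment.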
Tips for understanding Absolute vs. Relative Risk
Absolute differences between the effects of two treatments matter more to most people than relative differences.
Applying Systematic Reviews
How useful are the results of trials in a systematic review when it comes to weighing up treatment choices for particular patients?
Systematic Reviews and Meta-analysis: Information Overload
None of us can keep up with the sheer volume of material published in medical journals each week.
Combining the Results from Clinical Trials
Chris Cates notes that emphasizing the results of patients in particular sub-groups in a trial can be misleading.
Tips for understanding Non-inferiority Trials
A non-inferiority trial aims to show that a new intervention is ‘not unacceptably worse’ than the comparison intervention.
Cyagen is paying for citations
The company Cyagen offers researchers and other writers $100 or more for citing its products in publications.
No Power, No Evidence!
This blog explains that studies need sufficient statistical power to detect a difference between groups being compared.
Beginners guide to interpreting odds ratios, confidence intervals and p values
A tutorial on interpreting odds ratios, confidence intervals and p-values, with questions to test the reader’s knowledge of each concept.
Sample Size matters even more than you think
This blog explains why adequate sample sizes are important, and discusses research showing that sample size may affect effect size.
What is it with Odds and Risk?
This blog explains odds ratios and relative risks, and provides the formulae for calculating both measures.
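For reference, using a standard 2×2 table in which the treated group has a events and b non-events and the control group has c events and d non-events: relative risk = [a/(a+b)] ÷ [c/(c+d)], and odds ratio = (a/b) ÷ (c/d) = (a×d)/(b×c).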
Preclinical animal studies: bad experiments cost lives
This blog notes that few therapies that treat disease in animals successfully translate into effective treatments for humans.
Surrogate Endpoints in EBM: What are the benefits and dangers?
What are surrogate outcomes, their pros and cons, and why you should be cautious in extrapolating from them to clinical decisions.
The Systematic Review
This blog explains what a systematic review is, the steps involved in carrying one out, and how the review should be structured.
The Mean: Simply Average?
This blog explains ‘the mean’ as a measure of average; describes how to calculate it; and flags up some caveats.
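For example, the mean of 2, 3 and 10 is (2 + 3 + 10) ÷ 3 = 5; a single extreme value (here, 10) can pull the mean well away from the values that are typical, which is one reason a mean on its own can mislead.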
Publication Bias: An Editorial Problem?
A blog challenging the idea that publication bias mainly occurs at editorial level, after research has been submitted for publication.
The Bias of Language
Publication of research findings in a particular language may be prompted by the nature and direction of the results.
Defining Bias
This blog explains what is meant by ‘bias’ in research, focusing particularly on attrition bias and detection bias.
Balancing Benefits and Harms
A blog explaining what is meant by ‘benefits’ and ‘harms’ in the context of healthcare interventions, and the importance of balancing them.
Data Analysis Methods
A discussion of 2 approaches to data analysis in trials - ‘As Treated’, and ‘Intention-to-Treat’ - and some of the pros and cons of each.
Defining Risk
This blog defines ‘risk’ in relation to health, and discusses some of the difficulties in applying estimates of risk to a given individual.
Traditional Reviews vs. Systematic Reviews
This blog outlines 11 differences between systematic and traditional reviews, and why systematic reviews are preferable.
P Value in Plain English
Using simple terms and examples, this blog explains what p-values mean in the context of testing hypotheses in research.
Cancer Screening Debate
This blog discusses problems that can be associated with cancer screening, including over-diagnosis and thus (unnecessary) over-treatment.
Surrogate endpoints: pitfalls of easier questions
A blog explaining what surrogate endpoints are and why they should be interpreted cautiously.
Misconceptions about screening
Screening is not appropriate for everyone or for every disease. It should only be offered when it is likely to do more good than harm.
Making sense of randomized trials
A description of how clinical trials are constructed and analysed to ensure they provide fair comparisons of treatments.
Clinical Significance – CASP
To understand the results of a trial, it is important to understand the question it was asking.
Statistical Significance – CASP
In a well-conducted randomized trial, the groups being compared should differ from each other only by chance and by the treatment received.
P Values – CASP
Statistical significance is usually assessed using a p-value, a probability, which can take any value between 0 (impossible) and 1 (certain).
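For example, p = 0.03 means that, if there were really no difference between the treatments, a difference at least as large as the one observed would be expected to arise by chance alone in about 3% of such comparisons.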
Making sense of results – CASP
This module introduces the key concepts required to make sense of statistical information presented in research papers.
Screening – CASP
This module on screening has been designed to help people evaluate screening programmes.
Randomised Control Trials – CASP
This module looks at the critical appraisal of randomised trials.
Common Sources of Bias
Bias (the conscious or unconscious influencing of a study and its results) can occur in different ways and renders studies less dependable.
Introduction to JLL Explanatory Essays
Professionals sometimes harm patients by using inadequately evaluated treatments. Research addressing uncertainties can reduce this harm.
Surrogate markers may not tell the whole story
A webpage explaining the limitations of using surrogate outcome markers in clinical research.
Testing Treatments
Testing Treatments is a book to help the public understand why fair tests of treatments are needed, what they are, and how to use them.
Mixed Messages about Statistical Significance
A webpage explaining the difference between statistical and practical significance.
Science fact or fiction? Making sense of cancer stories
A Cancer Research UK blog, explaining how to assess the quality of health claims about cancer.
Double blind studies
A webpage discussing the importance of blinding trial participants and researchers to intervention allocation.
CEBM – Study Designs
A short article explaining the relative strengths and weaknesses of different types of study design for assessing treatment effects.
DISCERN online
A questionnaire providing a valid and reliable way of assessing the quality of written information on treatment choices.
7 words (and more) you shouldn’t use in medical news
A webpage explaining that dramatic effects of medical treatments are very rare.
Observational Studies – does the language fit the evidence?
A webpage explaining observational studies and their advantages and disadvantages.
CASP: making sense of evidence
The Critical Appraisal Skills Programme (CASP) website with resources for teaching critical appraisal.
Relative or absolute measures of effects
Dr Chris Cates' article explaining absolute and relative measures of treatment effects.
Evidence from Randomised Trials and Systematic Reviews
Dr Chris Cates' article discussing control of bias in randomised trials and explaining systematic reviews.
The perils and pitfalls of subgroup analysis
Dr Chris Cates' article demonstrating why subgroup analysis can be untrustworthy.
Reporting results of studies
Dr Chris Cates' article discussing how to report study results, with emphasis on P-values and confidence intervals.
AllTrials: All Trials Registered | All Results Reported
AllTrials aims to correct the situation in which studies remain unpublished or are published but with selective reporting of outcomes.
Association is not the same as causation. Let’s say that again: association is not the same as causation!
This article explains how to tell when correlation or association has been confused with causation.
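A classic illustration: ice cream sales and drowning deaths rise and fall together across the year, but neither causes the other; both rise in warm weather, when more people swim and more people buy ice cream.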
Shared Decision-Making
This resource from the Health Foundation shows how shared decision-making can be made to work in a typical consultation.
Connecting researchers with people who want to contribute to research
People in Research connects researchers who want to involve members of the public with members of the public who want to get involved.
Why are fair tests of treatments needed?
In this sub-section: Nature, the healer (this page); The beneficial effects of optimism and wishful thinking; The need to go […]