Is Real-World Evidence all it’s trumped up to be?

There’s so much discussion around Big Data and Real-World Evidence (RWE) and their role in improving access to medicines. RWE is clinical data collected after the publication of the clinical trials used for marketing authorization. Sources may include health insurance claims databases, patient registries, and hospital patient records, among others. Clinical trials, by contrast, are by design the gold standard for new evidence development. Some of the discussion around RWE is best described as voodooism, and some of it is real. Given these differences of opinion, we felt it timely to add our perspective to the mix. Is RWE all it’s trumped up to be? Our answer is “perhaps”.

The use of “real-world” in RWE suggests that clinical trials are somehow fake, as in fake-world evidence (FWE), the antithesis of RWE. It also suggests that clinical trial evidence is artificial and insufficient to address the daily challenges of improving and resourcing healthcare systems. Clinical trials are rigorous experiments, limited by design to smaller patient samples than RWE draws on. The implication is that because clinical trials are controlled and disconnected from everyday practice, they lose their relevance.

The irony is that the arguments levelled against clinical trials target the very features that make them “real-world evidence”, while the features touted for RWE are what make it “fake-world evidence”. Let’s explore this idea, but first a quick primer on random sampling and probability. Randomness, in statistical terms, is the absence of predictability and order in events. Clinical trials aim to identify consistent patterns in a world of randomness, which is why they are the gold standard in study design. An essential feature is that assignment to a study arm is left to pure chance: the probability of being selected is the same for every study participant. If some individuals in the sampling frame are more likely to be selected than others, selection bias erodes the accuracy of the study results. In other words, randomizing clinical trial participants across study interventions reduces the possibility of bias, rather than the intervention, changing the trial outcome.
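
The effect of unequal selection probabilities can be sketched with a toy simulation (every number here is an illustrative assumption, not data from any study or from this article): when each patient has the same chance of entering the treatment arm, the estimated effect lands near the assumed true effect; when healthier patients are preferentially selected for treatment, the estimate is inflated.

```python
import random

random.seed(42)

TRUE_EFFECT = 2.0  # assumed true benefit of treatment (illustrative)

def outcome(baseline_risk, is_treated):
    # Outcome improves with treatment and worsens with baseline risk.
    return (TRUE_EFFECT if is_treated else 0.0) - baseline_risk + random.gauss(0, 0.5)

def run_study(biased_selection, n=20000):
    treated, control = [], []
    for _ in range(n):
        risk = random.uniform(0, 4)  # hypothetical baseline risk score
        if biased_selection:
            # Healthier (low-risk) patients are more likely to be treated.
            p_treat = 0.8 if risk < 2 else 0.2
        else:
            p_treat = 0.5  # randomization: the same probability for everyone
        is_treated = random.random() < p_treat
        (treated if is_treated else control).append(outcome(risk, is_treated))
    # Naive effect estimate: difference in mean outcomes between arms.
    return sum(treated) / len(treated) - sum(control) / len(control)

est_randomized = run_study(biased_selection=False)  # close to TRUE_EFFECT
est_biased = run_study(biased_selection=True)       # inflated well above it
```

Randomization works here because the only systematic difference between the arms is the intervention itself; under biased selection, the treated arm is simply healthier to begin with, and the comparison credits the treatment for it.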

RWE does a poor job of handling questions about randomness, study design, probabilities, selection bias, and the sampling frame. RWE advocates promote the notion that the world is predictable and orderly, and that all that’s needed is an observational study to compare results with the clinical trial. Typically the question is: “Does the product work in-market as it did in the clinical trial?” Advanced statistical approaches are available to reduce bias in observational studies, but they do not operationalize the core features of clinical trials. Observational studies have a place in monitoring adverse events, tracking resource consumption, and extracting the cost estimates used in pharmacoeconomic evaluations. They are, however, a poor surrogate for the efficacy-related questions that clinical trials answer.
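
A companion sketch shows why statistical adjustment is not a substitute for randomization (again, all numbers are illustrative assumptions): when an unmeasured factor drives both who gets treated and how they fare, stratifying on the severity score a database does record shrinks the bias but cannot remove it.

```python
import random

random.seed(7)

TRUE_EFFECT = 2.0  # assumed true benefit of treatment (illustrative)

patients = []
for _ in range(20000):
    measured = random.uniform(0, 2)    # severity score a database records
    unmeasured = random.uniform(0, 2)  # frailty no database captures
    # Confounding by indication: sicker patients are treated more often.
    p_treat = 0.2 + 0.15 * (measured + unmeasured)
    is_treated = random.random() < p_treat
    y = (TRUE_EFFECT if is_treated else 0.0) - measured - unmeasured + random.gauss(0, 0.5)
    patients.append((measured, is_treated, y))

def effect(sample):
    t = [y for _, tr, y in sample if tr]
    c = [y for _, tr, y in sample if not tr]
    return sum(t) / len(t) - sum(c) / len(c)

naive = effect(patients)  # biased below TRUE_EFFECT

# "Adjust" by stratifying on the measured severity only, then average strata.
strata = [[p for p in patients if lo <= p[0] < lo + 0.5]
          for lo in (0.0, 0.5, 1.0, 1.5)]
adjusted = sum(effect(s) for s in strata) / len(strata)
# Adjustment moves the estimate toward TRUE_EFFECT but cannot reach it:
# the unmeasured confounder is still at work inside every stratum.
```

This is the sense in which advanced methods “reduce bias”: they can only condition on what was recorded, whereas randomization balances measured and unmeasured factors alike.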

Pharmaceutical companies, health insurance companies and government agencies should be open to using observational studies and clinical trials in decision-making. However, brushing the weaknesses of RWE under the carpet will result in poor policies. What are the implications of overlooking critical details?

  • Stakeholders may inadvertently perpetuate the inequitable distribution and use of healthcare resources,
  • Reimbursement decisions may be misled by results skewed by selection bias,
  • RWE may amplify the impact of conflicts of interest when the roles of data collection, data analysis, data ownership, data application and use, study funding, and policy-making are blurred.

Is RWE all it’s trumped up to be? Perhaps. It depends on the research question and on how observational studies are used. It also depends on how stakeholders engage with questions of randomness, study design, probabilities, selection bias, and the sampling frame. We need a science-based approach to evidence development, and less voodooism. We need a public discourse on conflicts of interest and how they may bias study results. Finally, we need improved science literacy among all stakeholders, especially in developing countries, to enable constructive critique of the use of observational studies.
