Research findings can be made glitzy and misleading

We are constantly being told what studies show: detergent manufacturers tell us how white our shirts can be if we use their product, and pharmaceutical corporations assure us that buying their pills will save our lives.

In every case, we are assured that the conclusions are supported by research studies.

Typically we trust the studies whose conclusions match our own opinions, but dismiss or ignore those that don't. Is this rational? Can't we learn anything concrete from the studies themselves? A little knowledge of probability can empower you to dissect the inferences made by TV actors who wear a white coat, carry a stethoscope and claim that a study shows this product is best for your health. Unfortunately, many people believe these actors.

The main question to ask is whether the medicine really helps. To separate actual effects from "just luck", a p-value has to be considered. This is the probability that you would have observed a result at least as surprising, assuming the drug had no real effect at all. If the p-value is large, the study does not support the new innovation or drug; if it is small, it is highly unlikely that the results are purely due to luck and quite likely that the effect is real. How small should the p-value be?
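To make this concrete, here is a small Python sketch (not from the article; the numbers are hypothetical) that estimates a p-value by simulation. Suppose 60 of 100 patients improved on a drug; we ask how often pure chance, a coin flip per patient, would produce a result at least that extreme.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical study result, for illustration only: 60 of 100 improved.
patients, improved = 100, 60

# Null hypothesis: the drug does nothing, so each patient improves with
# probability 0.5 (a coin flip). Simulate many such "luck-only" studies
# and count how often luck alone matches or beats the observed result.
trials = 10_000
at_least_as_extreme = 0
for _ in range(trials):
    luck_successes = sum(random.random() < 0.5 for _ in range(patients))
    if luck_successes >= improved:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / trials
print(f"Estimated p-value: {p_value:.3f}")
```

Here the estimated p-value comes out below 5 per cent, so by the convention described below the result would be called statistically significant.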

In health research the arbitrary standard is 5 per cent or less, which is one chance in 20 that the results are positive by luck alone. Can you believe that this arbitrarily chosen 5 per cent limit also allows the possibility that as many as one medical study in 20 may be wrong and misleading?
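That "one in 20" figure can be checked by simulation. The sketch below (my own illustration, not from the article) tests thousands of completely useless drugs at roughly the 5 per cent threshold; close to one in 20 comes out "significant" purely by luck.

```python
import random

random.seed(7)  # reproducible illustration

def fake_study(n_patients=100):
    """Simulate a study of a completely useless drug: each patient
    'improves' with probability 0.5, regardless of treatment."""
    return sum(random.random() < 0.5 for _ in range(n_patients))

# One-sided cut-off: with 100 coin flips, 59 or more 'improvements'
# happens with probability just under 5 per cent.
THRESHOLD = 59

studies = 10_000
false_positives = sum(fake_study() >= THRESHOLD for _ in range(studies))
rate = false_positives / studies
print(f"Useless drugs declared 'significant': {rate:.1%}")
```

Roughly 4 to 5 per cent of the useless drugs pass the test, which is exactly the risk the 5 per cent convention accepts.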

The 5 per cent limit is widely used; for a lawyer it would translate roughly as "beyond a reasonable doubt". The p-value is also closely related to the study's sample size, so how many participants were enrolled is an important question to ask.

For a study to be valid, another thing to check is that it is not biased. There are several types of bias to worry about. One occurs when patients are selected based on their condition: the p-value can be very small, but only because the researchers cheated. This is referred to as sampling bias; to avoid it, research participants should be strictly randomly enrolled. Another is reporting bias. This is also cheating: the drug company may deliberately not report adverse events (negative effects) so that the drug under study looks safe. To avoid such biases, people with a vested interest in the outcome should not be allowed to conduct the studies.

As kids, we learnt the trick that if your mom says no, you ask your dad, thus doubling your chances. Unfortunately, there is a simple way for large companies to do the same thing, called "publication bias". This involves commissioning many studies and publishing only those whose conclusions are favourable to the company, while burying the rest. Funding contracts explicitly state that investigators cannot publish without the company's consent.

In causality studies one has to be careful not to confuse correlation with causation. A classic example is lung cancer: is it caused by smoking, or by the yellow-stained fingers most heavy smokers have? Both conditions are caused by heavy smoking, but strictly following the data one could erroneously conclude that yellow finger stains cause lung cancer. Such false conclusions can cause a lot of difficulty; parents would refuse to let their children play with yellow crayons, lest they stain their fingers and get lung cancer! Correlation, therefore, does not imply causation.
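The yellow-fingers trap can be reproduced in a few lines of simulation. In this toy model (the probabilities are invented for illustration, not real epidemiology), smoking causes both yellow fingers and lung cancer, while yellow fingers have no causal effect at all; the raw data still show a strong correlation, which vanishes once we compare smokers with smokers.

```python
import random

random.seed(0)  # reproducible illustration

# Toy model: smoking causes both yellow fingers and lung cancer;
# yellow fingers themselves do nothing. All rates are invented.
people = []
for _ in range(10_000):
    smoker = random.random() < 0.3
    yellow_fingers = smoker and random.random() < 0.8
    lung_cancer = random.random() < (0.15 if smoker else 0.01)
    people.append((smoker, yellow_fingers, lung_cancer))

def cancer_rate(group):
    return sum(cancer for _, _, cancer in group) / len(group)

yellow = [p for p in people if p[1]]
clean = [p for p in people if not p[1]]
print(f"Cancer rate, yellow fingers: {cancer_rate(yellow):.1%}")
print(f"Cancer rate, clean fingers:  {cancer_rate(clean):.1%}")

# Compare like with like: among smokers only, finger colour is irrelevant.
smokers_yellow = [p for p in people if p[0] and p[1]]
smokers_clean = [p for p in people if p[0] and not p[1]]
print(f"Among smokers only: {cancer_rate(smokers_yellow):.1%} "
      f"vs {cancer_rate(smokers_clean):.1%}")
```

The naive comparison suggests yellow fingers are dangerous; restricting to smokers shows the finger stains themselves change nothing.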

If many studies fail to tell us accurately what causes what, do we have any hope of ever getting reliable information? Fortunately, the answer is yes. The key is to use a randomized trial. What this means is that a study's subjects are assigned at random to one of two groups, without regard to any other factor about their health or wealth or anything else. The two groups are then given different treatments, say a new drug versus a placebo. If there is then a statistically significant difference in results between the two groups, it cannot be due to any factor other than the treatment itself.
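The randomization step itself is almost trivially simple, which is part of its power. The sketch below (my own illustration, with made-up patient labels) shuffles a participant list and splits it in half, so that no characteristic of the patients can influence which group they land in.

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical participant list for illustration.
participants = [f"patient_{i:03d}" for i in range(200)]

# Randomized assignment: shuffle the whole list, then split it in half.
# Health, wealth and every other trait are ignored by construction.
random.shuffle(participants)
drug_group = participants[:100]
placebo_group = participants[100:]

print(len(drug_group), len(placebo_group))
```

Because chance alone decides the split, any systematic difference that later appears between the two groups can be attributed to the treatment.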

Finally, statistical tools like probability and the p-value will never replace your other critical thinking skills and decision-making methods: intuition, compassion, determination, honour and plain common sense. But they provide one more tool for understanding the world's randomness and your place within it. At least with this tool, the detergent advertisement or the drug company's promotion should not fool you.

Zulfiqarali Premji is a retired MUHAS professor. His career spans over 40 years in academia, research and public health.