The Nation's Health

Heart disease prevention: Chicken Little

Clinical studies can be designed in a number of ways. The ease and cost of these studies differ dramatically, as does the confidence we can place in their findings.

The most confident way to design a clinical study is to tell neither the participants nor the investigators which treatment is being offered, then to administer either the treatment or a placebo. Neither the people doing the research nor the participants know who is receiving what. Of course, there needs to be some way to find out, at the end of the study, what each person was given in order to analyze the outcomes.

This is called a “double-blind, placebo-controlled” clinical study. While not perfect, since it tends to examine a treatment in isolation (e.g., the effects of a single drug in a select group of people), it is the study design most likely to yield confident results, both negative and positive. This design is followed, for instance, for most prescription drugs.
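To make the mechanics concrete, here is a minimal sketch, in Python, of how a blinded assignment might be generated. The participant names, the function, and the simple 1:1 allocation are purely illustrative assumptions, not a description of any actual trial’s procedure; the point is only that everyone works from anonymous codes, while the key linking codes to drug or placebo stays sealed until the analysis stage.

```python
import random

def blinded_assignment(participant_ids, seed=42):
    """Randomly assign participants to 'drug' or 'placebo' behind anonymous codes.

    Returns:
      codes: participant_id -> anonymous code (all anyone sees during the trial)
      key:   anonymous code -> actual assignment (kept sealed until the study ends)
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)                 # random order breaks any link to enrollment order

    codes, key = {}, {}
    for i, pid in enumerate(ids):
        code = f"P{i:04d}"                               # code used on charts and records
        key[code] = "drug" if i % 2 == 0 else "placebo"  # simple 1:1 allocation
        codes[pid] = code
    return codes, key

# Neither investigators nor participants look at `key` until the trial is over.
codes, key = blinded_assignment(["participant_a", "participant_b", "participant_c", "participant_d"])
print(codes)  # e.g. {'participant_c': 'P0000', 'participant_a': 'P0001', ...}
print(key)    # revealed only at analysis time
```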

There are pitfalls in such studies, of course, and some have made headlines lately. For instance, beyond tending to examine single conditions in a select group of participants, a double-blind, placebo-controlled study can also fail to uncover rare effects. If a study enrolls 5,000 participants, for instance, but a rare complication develops in only 1 person out of every 20,000 exposed, then it’s unlikely that such an ill effect will be observed until far larger numbers of people are exposed to the agent.
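A rough back-of-the-envelope calculation shows why. Assuming, purely for illustration, that the complication truly strikes 1 in every 20,000 people exposed and that participants are independent, a 5,000-person trial will most often see no cases at all:

```python
# Probability that a rare complication shows up in a trial of a given size,
# assuming each participant independently carries a 1-in-20,000 risk (illustrative numbers).
rate = 1 / 20_000
n = 5_000

p_no_cases = (1 - rate) ** n        # chance the trial observes zero cases (~0.78)
p_at_least_one = 1 - p_no_cases     # chance at least one case is observed (~0.22)

print(f"P(no cases among {n:,} participants): {p_no_cases:.2f}")
print(f"P(at least one case observed):       {p_at_least_one:.2f}")
```

In other words, roughly three such trials out of four would see the complication zero times, and even a single observed case could easily be written off as coincidence.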

Another pitfall (one not so much of study design as of human greed) is that unfavorable study outcomes can be suppressed simply by failing to publish the results. This has undoubtedly happened numerous times over the years. For this reason, a registry has been created for human clinical trials as a means to encourage publication of outcomes, both favorable and unfavorable.

Despite its weaknesses, the double-blind, placebo-controlled design remains the most confident way to show whether or not a treatment yields a given effect. It is less prone to bias from either the participant or the investigator. Human nature being what it is, we tend to nudge results to suit our particular agenda or interests. An investigator who knows which participants received drug and which received placebo, and who owns lots of stock in the company or hopes for special favors from the pharmaceutical sponsor, is likely to perceive events in a light favorable to the desired outcome of the study.

Now, most studies are not double-blind, placebo-controlled studies. They are notoriously difficult to engineer; they raise thorny ethical questions (can you withhold treatment from a person with an aggressive cancer, for instance, and administer a placebo?); they often require substantial numbers of participants (thousands), many of whom may insist on payment for devoting their time and their bodies, and perhaps for taking on some risk; and they are tremendously expensive, costing many tens of millions of dollars.

For this reason, many other study designs are often followed. They are cheaper and quicker, and may not even require the active knowledge or participation of the group being studied. That’s not to say that the participants are being tricked. It may simply be a matter of determining whether people who live in cities have more heart attacks than people in rural areas by comparing heart attack death rates drawn from public records and population demographic data. Or a nutritional study could be performed by asking people how many eggs they eat each week, then contacting them every month for 5 years to see whether they’ve had a heart attack or other heart event. No treatment is introduced, and no danger is added to a person’s established habits. Many epidemiologic studies are performed this way.
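To show how little machinery such a comparison involves, here is a minimal sketch of the arithmetic behind that hypothetical egg study; the counts are invented purely for illustration. The study can report the rate of heart events in each group and the ratio between them, but nothing in the calculation establishes cause and effect:

```python
# Hypothetical follow-up counts from an observational (non-interventional) study.
# All numbers are invented for illustration only.
groups = {
    "eats eggs daily":  {"participants": 2_000, "heart_events": 36},
    "rarely eats eggs": {"participants": 2_000, "heart_events": 30},
}

rates = {name: g["heart_events"] / g["participants"] for name, g in groups.items()}
relative_risk = rates["eats eggs daily"] / rates["rarely eats eggs"]

for name, rate in rates.items():
    print(f"{name}: {rate:.1%} had a heart event over follow-up")
print(f"Relative risk: {relative_risk:.2f} (an association, not proof of causation)")
```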

The problem is that these other sorts of study designs, because they generate less confident results, are not generally regarded as proof of anything. They can only suggest the possibility of an association, a hypothesis. For real proof, a double-blind, placebo-controlled study may need to follow. Alternatively, an association suggested by a study of lesser design might, by virtue of a very powerful effect, be sufficient on its own, but this is rare. Thalidomide and catastrophic birth defects are an example: the association between the drug and fetal limb malformations was so clear-cut that no further investigation was required to establish causation. Of course, no one in their right mind would even suggest a blinded study in that situation.

Where am I going with this tedious rambling? Lately, the media has been making a big to-do about several studies, none of which were double-blind, placebo-controlled trials; they were observational, cross-sectional sorts of analyses, the sorts of studies that can only suggest an effect. This happened with Dr. Steve Nissen’s study of Avandia (rosiglitazone) for pre-diabetes and the risk of heart attack, and with the recent study suggesting that cancer incidence is increased when LDL cholesterol is low. Both were observations that suggested such associations.

Now, those of you following the Heart Scan Blog or the www.healthcare.gov website know that we defend neither drug companies nor their drugs. In fact, we’ve openly and repeatedly criticized the drug industry for many of its practices. Drugs are, in my opinion, miserably overused and abused.

But, as always, I am in pursuit of the truth. Neither of these studies, in my view, justified the sort of media attention it received. They are hypothesis-generating efforts, nothing more. You might argue that the questions raised are so crucial that any incremental risk from a drug is simply not worth it.

Despite the over-reaction to these studies, good will come of the fuss. I do believe that heightened scrutiny of the drug industry will result. Many people will seek to avoid prescription drugs and opt instead for healthy changes in lifestyle, thus reducing their exposure to costs and side effects.

But beware of the media, acting as our Chicken Little, reporting on studies that prove nothing but only raise questions.