Sunday, March 31, 2013

Nitric Oxide, Antidepressants, and "Sexual Side Effects"

Antidepressants, particularly those that affect serotonin metabolism, are famous for causing "sexual side effects." According to package inserts for the drugs, such side effects are relatively uncommon, but that's because patients in clinical trials don't readily volunteer information on sexual problems on their own; you have to ask them about it. When researchers have specifically asked patients about sexual side effects, the topic comes up 58% to 73% of the time, depending on the drug.

If you go looking in the scientific literature for an explanation of why "sexual side effects" happen at all, you'll mostly encounter rather vague, unsatisfying explanations. I did a little digging on my own and came up with what I think is a fairly edifying scientific explanation of what's going on. (From here, it gets a little technical. Bear with me if you can.)

"Sexual dysfunction" covers a lot of ground, and the gaps in our knowledge of the biochemistry underlying things like arousal and orgasm could better be called knowledge in our gaps. However, we do know that nitric oxide (NO) metabolism plays a huge role in stimulatory response. Clitoral erection as well as penile erection depend on production of NO and subsequent NO-induced accumulation of cyclic guanosine monophosphate, or cGMP (see this paper).

Nitric oxide is an interesting molecule. It's a gas at room temperature; soluble in oil as well as water; extremely corrosive; penetrates cell membranes with ease; and is probably the most widely distributed free radical in the human body. For many years, researchers knew of something called the endothelium-derived relaxing factor (EDRF), a substance produced by the inner cell lining of blood vessels, capable of signalling the surrounding smooth muscle to relax (resulting in vasodilation and increasing blood flow). It turns out EDRF and nitric oxide are one and the same. When NO is produced (from arginine and a bunch of cofactors, by the action of an enzyme called, shockingly enough, NO synthase), it triggers massive production of cGMP, which in turn activates protein kinases that (in turn) cause reuptake of calcium ion, which (through a bunch more steps) results in relaxation of smooth muscle.
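The signaling chain just described can be sketched in code. This is a purely qualitative toy, with invented rate constants and thresholds, meant only to make the order of events concrete; it is not a pharmacological model.

```python
# Toy qualitative sketch of the NO -> cGMP -> kinase -> Ca2+ -> relaxation
# cascade described above. All numbers here are invented for illustration.

def no_cascade(no_level):
    """Follow the chain: NO triggers cGMP production; cGMP activates
    protein kinases; kinases drive Ca2+ reuptake; low cytosolic Ca2+
    lets vascular smooth muscle relax (vasodilation)."""
    cgmp = 10.0 * no_level                       # NO triggers massive cGMP production
    kinase_activity = min(cgmp / 5.0, 1.0)       # kinase activation saturates
    cytosolic_ca = 1.0 - 0.9 * kinase_activity   # kinases cause Ca2+ reuptake
    relaxed = cytosolic_ca < 0.5                 # low Ca2+ -> smooth muscle relaxes
    return relaxed

# Cutting off NO production at the source means no cGMP, no kinase
# activation, no relaxation -- and hence no engorgement:
assert no_cascade(no_level=1.0) is True    # normal NO output -> vasodilation
assert no_cascade(no_level=0.0) is False   # no NO -> no relaxation
```

The point of the sketch is simply that every downstream step depends on the first one: zero out NO and the whole cascade goes dark.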

You wouldn't think relaxation of muscle would be the key to getting an erection going, but that's because you're thinking of the wrong "muscle." Here, we're concerned with blood-vessel smooth muscle. Relaxation of that kind of smooth muscle is crucial to allowing blood to find its way into the structures that ultimately cause a clitoris or penis to become engorged. Without an increase in NO production and all the downstream effects that lead to vasodilation, there's no arousal.

Nitric oxide is not just a vasodilator, though. It's also a gasotransmitter, which is to say a gas that has neurotransmitter properties.

The primary target of SSRIs in the body is the SERT (serotonin transporter) protein, which is the agent responsible for "reuptake" of serotonin. At first glance, it's not at all obvious how serotonin reuptake plays a role in nitric oxide formation. However, an extremely alert group of researchers at the Centre National de la Recherche Scientifique in Montpellier, France, realizing that SERT is known to associate with various proteins having a so-called PDZ domain, and also realizing that neuronal nitric oxide synthase (nNOS) has such a domain, decided to do an experiment. They fused a carboxyl-terminal SERT peptide to Sepharose beads and poured mouse-brain homogenate over the beads to see what stuck. The most abundant protein to stick? None other than nNOS.

The French team found that when nNOS and SERT were coexpressed in a cell line, they bound to each other in vivo. They also found that exposing a cell line expressing SERT and nNOS to serotonin caused the production of NO. Thus, if you blockade SERT (cutting off serotonin reuptake), you interfere with NO production at the source, because nNOS (the enzyme responsible for producing NO) and SERT (the serotonin transporter) are, in fact, joined together in vivo. (See this PNAS paper for an overview.) Interfering with NO production is bad, of course. No nitric oxide, no erection.

To me, this provides a pretty believable model of how SSRIs interfere with erectile function.

Before leaving the subject of nitric oxide production, I should mention that a huge amount of work has shown that NO metabolism is profoundly impaired in patients with diabetes. The work of the Centre National de la Recherche Scientifique team (outlined above) suggests a mechanism by which SSRIs interfere with NO metabolism. Another connection between SSRIs and diabetes?

And finally: It's thought by some that Minoxidil owes its hair-regrowing effects to the fact that it's a nitric-oxide agonist (Mi-NO-xidil). This suggests that hair loss would be an expected "side effect" of SSRIs. And indeed there are scattered reports of such in the literature.

Saturday, March 30, 2013

How Common Are "Sexual Side Effects"?

It's easy to get the impression, when reading popular articles about antidepressants, that drugs like Prozac, Zoloft, Paxil, Celexa, Cymbalta, Luvox, etc. are primarily psychoactive drugs that specifically alter brain chemistry. Indeed, this is what the drug companies want you to think. Depressed? Take this pill: it's designed to work on your brain. Will it cause side effects? Maybe, but they're just side effects.

This is a mistaken view of pharmacology. Drugs don't produce side effects. They just produce effects. Also, serotonin is not a brain chemical. It's a total body chemical. Well over 90% of the serotonin in your body is in your intestines and sex organs. Only 5% occurs in the brain. So when you take an SSRI, the drug reaches your whole body. It doesn't just head for the brain and then, incidentally, produce "side effects."

People who take antidepressants of the selective serotonin reuptake inhibitor (SSRI) class quickly realize this truth, namely that SSRIs are whole-body drugs, because the first effects most people notice (and complain about in clinical trials) are digestive and sexual-dysfunction effects. In clinical testing, SSRIs seldom fail to separate from placebo on those. If you're lucky enough to be one of the 50% or so of patients who see beneficial psychological effects, good for you, but in the meantime, the physiological effects (which can range from mild nausea to drowsiness to erectile dysfunction, or if you're really unlucky, diabetes or gastrointestinal bleeding) will be every bit as real as any effects on your brain.

How common are "sexual side effects" from SSRIs? If you read the package inserts for the drugs, they all downplay sexual side effects. The inserts rarely report more than 10% of patients complaining of ED, reduced libido, or difficulty reaching orgasm. The real world tells a far different story. In one of the largest prospective studies of its kind, the Spanish Working Group for the Study of Psychotropic-Related Sexual Dysfunction found that "the incidence of sexual dysfunction with SSRIs and venlafaxine [Effexor] is high, ranging from 58% to 73%." (Possibly, the remaining 27% to 42% of patients were still too depressed to care about sex.) The patients in question were taking Prozac (n=279), Zoloft (n=159), Luvox (n=77), Paxil (n=208), Effexor (n=55), or Celexa (n=66).

In the Spanish study, Paxil was associated with "significantly higher rates of erectile dysfunction/decreased vaginal lubrication" compared to other antidepressants. Meanwhile, "males had a higher rate of dysfunction than females (62.4% vs. 56.9%), but females experienced more severe decreases in libido, delayed orgasm, and anorgasmia."

Some studies of sexual side effects have shown a dose-response relationship. What's interesting about this is that most SSRIs have a flat dose-response curve for psychological effects. In other words, the physiological (sexual) effects are dose-dependent, but the effects on mood generally are not. I'll devote more discussion to the latter in a later post. The takeaway for now is that if you're on an SSRI and you don't like the sexual side effects, ask your doctor to reduce your dosage to the minimum effective therapeutic dose (because taking more than that generally does no good anyway). A second takeaway is: If your doctor keeps upping your dose, it means he or she hasn't read the literature. The literature says that beyond a certain dose, more doesn't do anything.
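The contrast between a flat psychological dose-response curve and a dose-dependent side-effect curve can be illustrated with standard Emax curves. The ED50 values below are assumptions chosen for illustration, not measured data for any particular drug.

```python
# Sketch contrasting a flat mood dose-response with a dose-dependent
# physiological (sexual side effect) response, using a classic Emax model.
# The ED50 values are illustrative assumptions, not measured data.

def emax(dose, ed50, ceiling=1.0):
    """Emax model: effect rises with dose and saturates at `ceiling`."""
    return ceiling * dose / (dose + ed50)

doses = [10, 20, 40, 80]  # mg, hypothetical

# Mood benefit saturates at a low dose (flat beyond the minimum effective
# dose), while the side-effect curve keeps climbing:
mood = [emax(d, ed50=5) for d in doses]    # ~0.67 to ~0.94: nearly flat
side = [emax(d, ed50=60) for d in doses]   # ~0.14 to ~0.57: still rising

# Going from 10 mg to 80 mg buys little extra mood benefit...
mood_gain = mood[-1] - mood[0]
# ...but a much larger increase in side-effect burden.
side_gain = side[-1] - side[0]
assert side_gain > mood_gain
```

Under these assumed curves, octupling the dose adds more side-effect burden than therapeutic effect, which is the logic behind staying at the minimum effective dose.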

Tomorrow: A look at why and how SSRIs mess up your sex life (latest biochemical findings). 

Friday, March 29, 2013

SSRIs, Weight Gain, and Diabetes

Yesterday I talked about the connection between antidepressant usage and diabetes. It may seem odd, at first, that antidepressants should double one's risk of diabetes, but given the pervasive involvement of serotonin in appetite and digestive processes (and the ability of antidepressants to make you fat; see discussion below), perhaps it shouldn't really come as such a shock.

To put things in perspective: Around 95% of the body's serotonin can be found in the gastrointestinal tract; only 5% is in your brain [reference]. Serotonin is the primary signalling molecule involved in motility, secretion, and vasodilation within the intestines, and just as in the brain, bioavailability of serotonin to target cells in the gut is dependent on the serotonin reuptake transporter (SERT). SERT, in turn, is the binding target of selective serotonin reuptake inhibitors. When you take an SSRI like Prozac, Zoloft, Paxil, etc., you're medicating your intestines (and your nearby reproductive system, which is highly innervated and dependent on serotonin for proper functioning); and also, your brain.

So it should surprise no one that the main physiologic effects of SSRI administration are, in fact, on gut, sex organs, and brain. If you were thinking you were mainly medicating your brain, with your sex organs and gut experiencing "side effects," you've essentially got it backwards. Serotonin's main job in your body is keeping your gastrointestinal system functioning. (It's worth noting that low-dose SSRIs have been used to treat irritable bowel syndrome, but it should also be noted that selective serotonin reuptake inhibitors increase the risk of upper gastrointestinal bleeding, particularly when used with NSAIDs.) For more on serotonin's action in the gut, see this paper and also this paper. I also recommend "Serotonin receptors and transporters — roles in normal and abnormal gastrointestinal motility," in Alimentary Pharmacology & Therapeutics, Volume 20, Issue Supplement s7, pages 3–14, November 2004 (full article here).

Weight control is at least partially mediated through the 5-HT2c serotonin receptor in the hypothalamus. We know this is true because "knockout" mice with a targeted mutation of the 5-HT2c receptor gene engage in chronic hyperphagia (overeating), leading to obesity and hyperinsulinemia [reference]. We also know this because obese humans who've been exposed to the potent 5-HT2c receptor agonist m-chlorophenylpiperazine (mCPP) experience weight loss and appetite changes. There's also very good evidence that polymorphisms in the promoter region of the 5-HT2c receptor gene play a direct role in obesity and diabetes in humans. These kinds of interactions led the authors of a report in Nature Medicine 4, 1152–1156 (1998) to state categorically that "a perturbation of brain serotonin systems can predispose to type 2 diabetes."

Finally, it's worth pointing out that some SSRIs (Prozac in particular) exhibit direct action on 5-HT2c receptors (and not just on SERT).

So bottom line, there's plenty of reason to believe that antidepressants can play a direct role in fostering diabetes.

And then there's the small matter of weight gain.

Most doctors tell their patients that weight gain is not a problem with SSRIs (that it's mainly a problem with older tricyclic drugs), but this is a myth. Drug trials of the kind that lead to FDA approval of antidepressants are far too short in duration to bring out long-term weight-change trends, which is why weight gain is hardly ever mentioned in package inserts for SSRIs. (In the rare instance when weight gain is mentioned, it's usually painted as restoration of appetite due to recovery from depression, which is self-serving nonsense, IMHO.)

In a study called "Real-World Data on SSRI Antidepressant Side Effects," published in Psychiatry (Edgmont), February 2009, 6(2):16–18, real-world patients taking citalopram (Celexa), escitalopram (Lexapro), fluoxetine (Prozac), paroxetine (Paxil), and sertraline (Zoloft) were asked about side effects. Among the 229 patients who noted specific side effects, the three most common side effects, by far, were sexual dysfunction, sleepiness, and weight gain (see graph below). All three occurred at about the same rate (56, 53, and 49 reports, respectively, out of 229 total).

Real-world SSRI users reported weight gain almost as often as sexual side effects.

How much weight gain are we talking about? Very substantial weight gain. Long-term studies have reported mean weight gains of 15 lb (6.8 kg) for sertraline (Zoloft), 21 lb (9.5 kg) for fluoxetine (Prozac), and 24 lb (10.9 kg) for paroxetine (Paxil). Citalopram (Celexa) is often painted as being less likely to cause weight gain than other antidepressants, and yet in one trial, 8 of 18 patients reported an average weight gain of 15.7 lb (7.1 kg) after receiving citalopram for just 5 weeks. See this table for a rundown of weight-gain effects for various popular antidepressants.

A Norwegian study (Raeder MB, Bjelland I, Vollset SE, Steen VM, "Obesity, dyslipidemia, and diabetes with selective serotonin reuptake inhibitors: the Hordaland Health Study," J Clin Psychiatry 2006;67:1974–1982) found:
We observed an association between use of SSRIs as a group (N = 461) and abdominal obesity (OR = 1.40, 95% CI = 1.08 to 1.81) and hypercholesterolemia (OR = 1.36, 95% CI = 1.07 to 1.73) after adjusting for multiple possible confounders. There was also a trend toward an association between SSRI use and diabetes.
The Norwegian study involved patients taking paroxetine, citalopram, sertraline, fluoxetine, and/or fluvoxamine.
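For readers unfamiliar with how odds ratios like these are computed, here is a back-of-envelope sketch. The counts below are hypothetical, chosen only to produce an OR near the reported 1.40 for abdominal obesity; they are not the study's actual data, and the study's published CI was adjusted for confounders, which a crude 2x2 calculation like this cannot reproduce.

```python
# Crude odds ratio with a Woolf (log-based) 95% CI from a 2x2 table.
# All counts are hypothetical, for illustration only.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI for a 2x2 table:
         a = exposed with outcome,   b = exposed without,
         c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical: 140 of 461 SSRI users with abdominal obesity, versus a
# made-up unexposed comparison group:
or_, lo, hi = odds_ratio_ci(a=140, b=321, c=1200, d=3850)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # OR ~ 1.40
```

Note that a confidence interval excluding 1.0, as in the Norwegian results, is what makes the association statistically meaningful.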

The bottom line: Weight gain is a serious issue with SSRIs (not just older antidepressants), and the association of SSRIs with increased risk of diabetes is not a statistical fluke of some kind, but a very real outcome. Given what we know about serotonin's role in appetite, weight control, and gastrointestinal function, none of this should come as a surprise.

Thursday, March 28, 2013

New Diabetes Risk Factor: Antidepressants

If people knew that taking antidepressants for two years or more doubles their risk of diabetes, would they continue to take them? That's what I wondered when I saw the paper by Andersohn et al. in Am J Psychiatry 2009; 166:591–598 called "Long-Term Use of Antidepressants for Depressive Disorders and the Risk of Diabetes Mellitus." It's an eye-opening study, drawing on data from the U.K. General Practice Research Database. The numbers are solid and the results hard to argue with. Even after controlling for body mass index (BMI), hypertension, hyperlipidemia, smoking, age, and other factors, the authors of the study found that long-term use of antidepressants (of any kind: tricyclic, MAOI, SSRI) was associated with an almost two-fold greater risk of diabetes.

This is a shocking result, because it indicates that antidepressants add significantly to the burden of disease. In the U.S., where 27 million people take antidepressants (60% of them for two years or longer), it could mean an extra million cases of diabetes.
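The "extra million cases" figure is rough arithmetic, and it helps to see the assumptions laid bare. The 27 million users and 60% long-term figures come from the text and the ~84% excess risk from the Andersohn study; the baseline diabetes risk below is my own assumption for illustration.

```python
# Back-of-envelope arithmetic behind the "extra million cases" estimate.
# baseline_risk is an assumed background diabetes risk over the relevant
# period; the other inputs come from the text above.

us_users = 27_000_000
long_term_users = 0.60 * us_users    # on antidepressants two years or more
baseline_risk = 0.07                 # ASSUMED background diabetes risk
excess_relative_risk = 0.84          # Andersohn: +84% with long-term use

extra_cases = long_term_users * baseline_risk * excess_relative_risk
print(f"{extra_cases/1e6:.2f} million extra cases")  # -> 0.95 million extra cases
```

With a background risk anywhere near 7%, the excess attributable to long-term antidepressant use lands in the neighborhood of a million cases, which is all the text's estimate claims.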

From the 1988-1994 time period to the 2005-2008 period, antidepressant usage in the U.S. rose 400%, according to the Centers for Disease Control. This corresponds with an almost-quadrupling of diabetes cases in the same time frame.

Before you start thinking that maybe depression in and of itself predisposes people to weight gain and diabetes (which it does), go read the Andersohn paper. The authors already thought of such things and controlled for them in their study control populations. They found that even after controlling for the usual risk factors, recent long-term (24 months or more) antidepressant usage increased the risk of diabetes by 84%. (Consult the paper for a list of the 29 antidepressants included in the analysis and the individual risk ratios for each.)

The Andersohn study was motivated by an earlier finding that continuous antidepressant use over an average study duration of 3.2 years was associated with a 2.6-fold increased risk of diabetes (95% CI=1.37–4.94) in the placebo arm and a 3.39-fold increase in risk (95% CI=1.61–7.13) in the lifestyle intervention arm of the study reported in Diabetes Care. 2008 Mar;31(3):420-6. The Andersohn study confirms the previous finding.

Independent confirmation of the foregoing results can be found in a 2010 cross-sectional study of patients in Finland. Mika Kivimäki et al., writing in Diabetes Care, December 2010, 33(12):2611–2616, reported finding a two-fold increased risk of Type 2 diabetes in patients who had taken 200 or more "defined daily doses" (about six months' worth) of antidepressant medication. Stratification by antidepressant type found no significant difference for tricyclics versus SSRIs. Interestingly, diabetes risk was higher for patients who had taken 400 or more daily doses versus those who'd taken 200 to 400 daily doses, indicating a kind of dose-response relationship. The longer you're on meds, the more likely you'll get diabetes.

The map below (from CDC's Diabetes Report Card 2012) shows pretty clearly that if there's one thing America doesn't need right now, it's more cases of diabetes. Diabetes is already out of control in the U.S. In many counties (those shown in dark red), diabetes already afflicts more than 11% of the population.

We already know that high body mass index, abnormal blood lipids, inactivity, and age are important risk factors for diabetes. But we now know a major new risk factor: antidepressants. As Richard R. Rubin writes in US Endocrinology, 2008;4(2):24-7:
Applying current estimates of the number of people in the US who have prediabetes (57 million with impaired glucose tolerance or impaired fasting glucose), and estimates of the prevalence of antidepressant use among adults in the US (at least 10%), it would seem that almost six million people in the US have pre-diabetes and are taking antidepressants. This is a fairly large number of people, and if future research confirms that antidepressants are an independent risk factor for type 2 diabetes, efforts to minimize the potentially negative effects of these agents on glycemic control should be pursued.

Wednesday, March 27, 2013

Heatmap Visualization of Antidepressants

With so many antidepressants on the market now, it's hard to keep them straight. The categorizations that have been offered thus far are not uniformly helpful for understanding the drugs' in vivo targets. For example, a term like "tricyclic" refers to the chemical structure of the drug molecule itself (and tells you nothing about what it does in the brain). Likewise, a more descriptive term like Selective Serotonin Reuptake Inhibitor is somewhat misleading in that many SSRIs actually have a fairly complex binding profile with regard to neurotransmitter receptors. For example, in addition to its action at the serotonin transporter (SERT), Prozac (fluoxetine) has potent 5-HT2C receptor antagonist effects along with being an agonist of the σ1-receptor. And then you have the fact that a drug like clomipramine is classified as a tricyclic, yet pharmacologically it shows a great deal of similarity to SSRIs. It can all get confusing in a hurry.

So the question is: How can we visualize what these drugs are doing, without reading long descriptions of their biological activities or trying to digest tables of receptor binding constants? Well, it turns out a team in the Netherlands has come up with an interesting way of visualizing antidepressant modes of action at a glance. It's shown in the graphic below.

Derijks et al. presented this nifty heatmap in "Visualizing Pharmacological Activities of Antidepressants: A Novel Approach," The Open Pharmacology Journal, 2008, 2, 54-62. They took the known binding constants for various antidepressants and calculated "receptor occupancies" for these drugs at the 5-HT (5-hydroxytryptamine; i.e. serotonin) reuptake transporter, 5-HT2c receptor, histamine H1 receptor, norepinephrine reuptake transporter, alpha1 receptor, and muscarinic M3 receptor, then subjected the results to principal component analysis to arrive at a hierarchical cluster scheme based on bioactivity homologies. What they found is that the 20 antidepressants they looked at grouped naturally into four main clusters. Within the clusters, drugs grouped together based (again) on similarity of activity at binding sites.
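A toy version of this kind of analysis can be sketched in a few lines: convert binding affinities into fractional occupancies, then cluster the drugs by their occupancy fingerprints. The Ki values and drug concentration below are invented placeholders, not the paper's data, and I use plain hierarchical clustering rather than reproducing the paper's full PCA-based pipeline.

```python
# Sketch in the spirit of the Derijks analysis: turn Ki values into
# receptor occupancies, then hierarchically cluster the occupancy
# "barcodes." All numbers are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def occupancy(conc_nM, ki_nM):
    """Fractional occupancy from simple mass-action binding: C / (C + Ki)."""
    return conc_nM / (conc_nM + ki_nM)

# Rows: drugs; columns: hypothetical Ki (nM) at SERT, 5-HT2c, H1, NET.
ki = np.array([
    [ 1.0, 200.0, 5000.0, 400.0],   # "SSRI-like" drug A
    [ 2.0, 150.0, 6000.0, 500.0],   # "SSRI-like" drug B
    [30.0,  20.0,    1.0,  10.0],   # "tricyclic-like" drug C
    [40.0,  15.0,    2.0,   8.0],   # "tricyclic-like" drug D
])
occ = occupancy(100.0, ki)          # assume ~100 nM free drug everywhere

# Ward-linkage clustering on the occupancy fingerprints; the two
# SSRI-like rows land in one cluster, the tricyclic-like rows in the other.
labels = fcluster(linkage(occ, method="ward"), t=2, criterion="maxclust")
print(labels)
```

Each row of `occ` is exactly the "barcode" idea: a drug's fingerprint across binding sites, which the clustering then groups by similarity.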

Basically, you can look at any given row in this picture as a kind of "barcode" (or fingerprint, if you like) for a particular drug based on its own particular mode(s) of action.

Bupropion (Wellbutrin) shows up as a solid yellow stripe, which might seem odd until you realize the Derijks group did not attempt to include dopamine-transporter occupancy in its model (nor did it look at nicotinic acetylcholine receptor binding, also important for bupropion). The group said they tried looking at dopamine-transporter binding but found it "did not change the overall classification in four clusters," so they left it out. For visualization purposes, it would have been nice if they'd left it in.

I think this kind of visual representation of drug activity is a very useful tool for differentiating antidepressants, and if psychiatrists (and nurse practitioners) knew about it they could use it to aid the decision-making process when it comes to trying a non-responsive patient on a different class of drug. If a drug from Cluster 1 doesn't work, it's only logical to try a drug from a different cluster rather than (say) a different drug from the same cluster.

In case you're having trouble with the generic drug names shown in the above figure, here's a table giving the translations between generic and trade names:

Generic Name Trade Name (and Category)
amitriptyline Elavil, Saroten (TCA)
bupropion Wellbutrin, Zyban
citalopram Celexa, Cipramil (SSRI)
clomipramine Anafranil (TCA)
doxepin Adapine, Sinequan (TCA)
duloxetine Cymbalta, Ariclaim (SNRI)
escitalopram Lexapro, Cipralex (SSRI)
fluoxetine Prozac, Sarafem (SSRI)
fluvoxamine Luvox (SSRI)
imipramine Tofranil (TCA)
maprotiline Deprilept, Ludiomil (TCA)
mianserin Bolvidon, Norval (TeCA)
mirtazapine Remeron, Avanza (TeCA)
nefazodone Serzone, Nefadar
nortriptyline Aventyl, Pamelor (TCA)
paroxetine Paxil, Seroxat (SSRI)
reboxetine Edronax, Prolift (NRI)
sertraline Zoloft, Lustral (SSRI)
trazodone Desyrel, Deprax (SARI)
venlafaxine Effexor (SNRI)
NRI = Norepinephrine Reuptake Inhibitor
SARI = Serotonin Antagonist and Reuptake Inhibitor
SNRI = Serotonin-Norepinephrine Reuptake Inhibitor
SSRI = Selective Serotonin Reuptake Inhibitor
TCA = Tricyclic Antidepressant
TeCA = Tetracyclic Antidepressant

Tuesday, March 26, 2013

Is Depression Really Biochemical?

Direct-to-consumer ads by drug companies have been highly successful in convincing doctors as well as patients (and their families) that mental illness has biological, indeed biochemical, roots. Surprisingly few people question that depression is due to a "chemical imbalance in the brain," even though the slightest bit of questioning would lead you on a search of the literature that uncovers zero evidence supporting such a theory. Saying that because SSRIs are effective in treating depression (for a minority population of depression sufferers), therefore depression is a disease of serotonin imbalance in the brain, is like saying that because I have unfocused thoughts in the morning unless I drink coffee, therefore there's a disease, Morning Unfocused Thought Disorder, that's caused by a chemical imbalance of adenosine in my brain. (Caffeine is an antagonist of adenosine receptors.) From there, it's only one step to the chocolate-imbalance theory of lovesickness, and then but another short step to the (once well accepted by doctors) masturbation theory of insanity.

The biochemical-imbalance theorists have yet to reconcile the fact that for some people (those who respond to SSRIs), depression seems to involve serotonin whereas for others (those who respond to SNRIs) it seems to involve norepinephrine and serotonin, whereas for others (those who respond to drugs like mirtazapine) it seems to involve norepinephrine and dopamine and serotonin (or dopamine and norepinephrine in the case of bupropion), whereas for others (those who respond to tricyclics) it involves an intricate combination of imbalances related to actions at the serotonin and norepinephrine and dopamine transporters (SERT and NET and DAT), the H1 histamine receptor, the 1A and 2A serotonin receptors, α1 and α2 adrenergic receptors, the D2 dopamine receptor, and the muscarinic acetylcholine receptor. That's an awful lot of different types of "chemical imbalance," for one disease. The literature shows (see, e.g., the meta-analyses by Kirsch and others, and the STAR*D study) that depressed patients respond more-or-less equally well to any of the major categories of antidepressants, basically proving that the drugs are not highly specific in their effects on patient subpopulations. If the drugs were indeed specific to distinct subpopulations (if some patients specifically needed an SNRI, whereas others specifically needed an SSRI, whereas others needed a tricyclic, etc.), then the subpopulations implied by each drug class's response rate would sum to well over 100% of the total patient population, which is impossible.

And then there's the somewhat curious fact that tianeptine, an antidepressant that has been sold for many years under the name Coaxil in Europe and South America, is actually a selective serotonin reuptake enhancer (not inhibitor). So apparently, some depression is caused by too much (rather than too little) free serotonin.

Studies that have tried to induce depressive symptoms in normal subjects by altering their neurotransmitter balances have failed to do so. (E.g., Salomon et al., "Lack of behavioral effects of monoamine depletion in healthy subjects," Biological Psychiatry, 1 January 1997, 41:1, 58–64.) This elementary result is rarely discussed.

For more on this general subject, I recommend the paper "The Chemical Imbalance Explanation for Depression: Origins, Lay Endorsement, and Clinical Implications" by Christopher M. France, Paul H. Lysaker, and Ryan P. Robinson, in Professional Psychology: Research and Practice, 2007, 38:4, 411–420 (full PDF here).

In J Clin Psychiatry 59:4–12, researchers from the US National Institute of Mental Health Laboratory of Clinical Science caution: "[T]he demonstrated efficacy of selective serotonin reuptake inhibitors…cannot be used as primary evidence for serotonergic dysfunction in the pathophysiology of these disorders." Yet the medical industry (from clinicians to drug makers to advocacy groups) continues to promote a serotonin-imbalance theory of depression, as if it's accepted fact. It's not only not fact, it's a bit bizarre.

The Zoloft web site promotes Zoloft (an SSRI) as a treatment for Major Depressive Disorder (MDD), Obsessive-Compulsive Disorder (OCD), Panic Disorder, Posttraumatic Stress Disorder (PTSD), Premenstrual Dysphoric Disorder (PMDD), and Social Anxiety Disorder. As the authors of a recent (2005) paper in PLoS Medicine noted: "For the serotonin hypothesis to be correct as currently presented, serotonin regulation would need to be the cause (and remedy) of each of these disorders. This is improbable, and no one has yet proposed a cogent theory explaining how a singular putative neurochemical abnormality could result in so many wildly differing behavioral manifestations." See Lacasse, J.R., and Leo, J. (2005), "Serotonin and Depression: A Disconnect between the Advertisements and the Scientific Literature," PLoS Med 2(12):e392.

The Code of Federal Regulations under which direct-to-consumer drug advertising is regulated states that an advertisement may be cited as false or misleading if it "[c]ontains claims concerning the mechanism or site of drug action that are not generally regarded as established by scientific evidence by experts qualified by scientific training and experience without disclosing that the claims are not established and the limitations of the supporting evidence…" Direct-to-consumer advertisements are also forbidden to include content that "contains favorable information or opinions about a drug previously regarded as valid but which have been rendered invalid by contrary and more credible recent information." Despite this, we still find (for example) the Paxil website saying: "Paxil can help restore the balance of serotonin (a naturally occurring chemical in the brain) -- which helps reduce the symptoms of anxiety and depression." (Retrieved 24 Mar 2013.) And yet the FDA has never cited a pharmaceutical company for these sorts of falsehoods, presented over and over again in their advertising about antidepressants. There simply is no evidence that depression is caused by an imbalance of serotonin (or anything else) in the brain, any more than anxiety among smokers is caused by a lack of nicotine in the brain.

It would be easier to accept the many neurotransmitter-imbalance theories of depression if the drugs in question worked with the same high degree of efficacy that, say, aspirin works for a headache or that insulin does for diabetes, but in fact the drugs work so poorly that the number one bestselling drug in America right now is an adjunctive drug sold on the basis of helping antidepressants work better (Abilify). When I mentioned to a (non-depressed) friend of mine that the retail price of a month's worth of Abilify (5mg, 30 pills) is a thoroughly unconscionable $683 (making Abilify many times more valuable than pure gold), his comment was: "Why don't you just go lease a new Acura and see if that doesn't cheer you up? It's cheaper, and more satisfying."

Personally, I think he's right. Everybody on Medicare and Medicaid who's receiving Abilify at low or no cost right now (via government subsidy) should be offered a choice: continue to receive Abilify, or let Uncle Sam put you in a new Acura.

I wonder what people would choose?

Sunday, March 24, 2013

New Patents Aim to Reduce Placebo Effect

The pharma industry has a big problem on its hands: Placebos are getting to be way too effective. Something needs to be done. But what? What can you do about placebo response? The old saying "It is what it is" would seem to hold true in this case.

One answer: come up with low-placebo-response study designs, and patent them if possible. (And yes, it is possible. But we're getting ahead of the story.) 

Placebo effect has always been a problem for drug companies, but it's especially a problem for low-efficacy drugs (psych meds, in particular). An example of the problem is provided by Eli Lilly. In a March 29, 2009 press release announcing the failure of Phase II trials involving a new atypical antipsychotic, LY2140023 monohydrate (an mGlu2/3 receptor agonist), Lilly said:
In Study HBBI, neither LY2140023 monohydrate, nor the comparator molecule olanzapine [Zyprexa], known to be more effective than placebo, separated from placebo. In this particular study, Lilly observed a greater-than-expected placebo response, which was approximately double that historically seen in schizophrenia clinical trials. [emphasis added]
Fast-forward to August 2012: Lilly throws in the towel on the drug. According to a report in Genetic Engineering & Biotechnology News, "Independent futility analysis concluded H8Y-MC-HBBN, the second of Lilly's two pivotal Phase III studies, was unlikely to be positive in its primary efficacy endpoint if enrolled to completion."

Lilly is not alone. Rexahn Pharmaceuticals, in November 2011, issued a press release about disappointing Phase IIb trials of a new antidepressant, Serdaxin, saying: "Results from the study did not demonstrate Serdaxin’s efficacy compared to placebo measured by the Montgomery-Asberg Depression Rating Scale (MADRS). All groups showed an approximate 14 point improvement in the protocol defined primary endpoint of MADRS."

In March 2012, AstraZeneca threw in the towel on an adjunctive antidepressant, TC-5214, after the drug failed to beat placebo in Phase III trials. A news account put the cost of the failure at half a billion dollars.

In December 2011, shares of BioSante Pharmaceutical Inc. slid 77% in a single session after the company's experimental gel for promoting libido in postmenopausal women failed to perform well against placebo in late-stage trials.

The drug companies say these failures are happening not because their drugs are ineffective, but because placebos have recently become more effective in clinical trials. (For evidence on increasing placebo effectiveness, see yesterday's post, where I showed a graph of placebo efficacy in antidepressant trials over a 20-year period.)

Some idea of the desperation felt by drug companies can be glimpsed in this slideshow (alternate link here) by Anastasia Ivanova of the Department of Biostatistics, UNC at Chapel Hill, which discusses tactics for mitigating high placebo response. The Final Solution? Something called The Sequential Parallel Comparison Design.

SPCD is a cascading (multi-phase) protocol design. In the canonical two-phase version, you start with a larger-than-usual group of placebo subjects relative to non-placebo subjects. In phase one, you run the trial as usual, but at the end, placebo non-responders are randomized into a second phase of the study (which, like the first phase, uses a placebo control arm and a study arm). SPCD differs from the usual "placebo run-in" design in that it doesn't actually eliminate placebo responders from the overall study. Instead, it keeps their results, so that when the phase-two placebo group's data are added in, they effectively dilute the higher phase-one placebo results. The assumption, of course, is that placebo non-responders will be non-responsive to placebo in phase two after having been identified as non-responders in phase one. In industry argot, there will be carry-over of (non)effect from placebo phase one to placebo phase two.

This bit of chicanery (I don't know what else to call it) seems pointless until you do the math. The Ivanova slideshow explains it in some detail, but basically, if you optimize the ratio of placebo to study-arm subjects properly, you end up increasing the overall power of the study while keeping placebo response minimized. This translates to big bucks for pharma companies, who strive mightily to keep the cost of drug trials down by enrolling only as many subjects as might be needed to give the study the desired power. In other words, maximizing study power per enrollee is key. And SPCD does that.
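The dilution effect is easy to see with a little arithmetic. The following sketch uses purely illustrative numbers (the arm sizes, response rates, and the phase-two "carry-over" rate are all assumptions, not figures from any real trial):

```python
# Sketch of how SPCD dilutes the apparent placebo response.
# All numbers below are hypothetical, chosen only for illustration.
n1_placebo = 200                 # phase-1 placebo arm (larger than usual, per SPCD)
p_placebo_phase1 = 0.35          # assumed phase-1 placebo response rate

n1_responders = round(n1_placebo * p_placebo_phase1)   # 70 phase-1 placebo responders
n2_placebo = (n1_placebo - n1_responders) // 2         # 65 non-responders re-randomized to placebo

# The key SPCD assumption ("carry-over"): subjects already identified as
# placebo non-responders respond to placebo at a much lower rate in phase two.
p_placebo_phase2 = 0.10
n2_responders = int(n2_placebo * p_placebo_phase2)     # ~6 phase-2 placebo responders

# Pooling both placebo phases drags the overall placebo rate down.
pooled_rate = (n1_responders + n2_responders) / (n1_placebo + n2_placebo)
print(f"phase-1 placebo response: {p_placebo_phase1:.0%}")
print(f"pooled placebo response:  {pooled_rate:.1%}")
```

Under these assumed numbers, a 35% phase-one placebo response pools down to roughly 29%, with no change at all in how any individual subject actually responded. That gap is exactly what the design buys.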

SPCD was first introduced in the literature in a paper by Fava et al. (Psychother Psychosom. 2003 May-Jun;72(3):115-27) with the interesting title "The problem of the placebo response in clinical trials for psychiatric disorders: culprits, possible remedies, and a novel study design approach." The title is interesting in that it paints placebo response as an evil (complete with culprits). In this paper, Maurizio Fava and his colleagues point to possible causes of increasing placebo response that have been considered by others ("diagnostic misclassification, issues concerning inclusion/exclusion criteria, outcome measures' lack of sensitivity to change, measurement errors, poor quality of data entry and verification, waxing and waning of the natural course of illness, regression toward the mean phenomenon, patient and clinician expectations about the trial, study design issues, non-specific therapeutic effects, and high attrition"), glossing over the most obvious possibility: that paid research subjects (for-hire "volunteers"), many of them desperate to obtain free medical care, are only too willing to tell researchers whatever they want to hear about whatever useless palliative is given them. But then Fava and his coauthors make the baffling statement: "Thus far, there has been no attempt to develop new study designs aimed at reducing the placebo effect." They go on to present SPCD as a more or less revolutionary advance in the quest to squelch placebo effect.

Until now, I don't think there had ever been a discussion, in a scientific paper, of a need to attack the placebo effect as something bothersome, something that interferes with scientific progress, something to be guarded against as vigilantly as swine flu. The whole idea that the placebo effect is getting in the way of producing meaningful results is, I think, repugnant to anyone with scientific training.

What's even more repugnant, however, is that Fava's group didn't stop with a mere paper in Psychotherapy and Psychosomatics. They went on to apply for, and obtain, U.S. patents on SPCD (on behalf of The General Hospital Corporation of Boston). The relevant U.S. patent numbers are 7,647,235; 7,840,419; 7,983,936; 8,145,504; 8,145,505, and 8,219,41, the most recent of which was granted July 2012. You can look them up on Google Patents.

The patents begin with the statement: "A method and system for performing a clinical trial having a reduced placebo effect is disclosed." Incredibly, the whole point of the invention is to mitigate (if not actually defeat) the placebo effect. I don't know if anybody else sees this as disturbing. To me it's repulsive.

If you're interested in licensing the patents, RCT Logic will be happy to talk to you about it. Download their white paper and slides. Or just visit the website.

Have antidepressants and other drugs now become so miserably ineffective, so hopelessly useless in clinical trials, that we need to redesign our scientific protocols in such a way as to defeat placebo effect? Are we now to view placebo effect as something that needs to be made to go away by protocol-fudging? If so, it puts us in a new scientific era indeed.

But that's where we are, apparently. Welcome to the new world of wonder drugs. And pass the Tic-Tacs.

Saturday, March 23, 2013

Placebos Are Becoming More Effective

Recent reports in the literature tell of a huge crisis that has overtaken drug research. Placebos are becoming more effective, and researchers don't know what to do about it.

If placebos are inert, how can they be said to be getting "more effective"? The answer is contained in articles like the one by Walsh et al. that appeared in the April 10, 2002 issue of JAMA (here) entitled "Placebo Response in Studies of Major Depression: Variable, Substantial, and Growing." The authors of this paper did a meta-analysis of 75 studies on drug treatments for major depressive disorder (MDD) that were published between 1981 and 2000. Criteria for inclusion of studies in the meta-analysis were:
  1. Papers must have been published in English
  2. Published between January 1981 and December 2000
  3. Studies primarily composed of outpatients with Major Depressive Disorder (not bipolar)
  4. Had at least 20 patients in the placebo group
  5. Lasted at least 4 weeks
  6. Randomly assigned patients to receive an antidepressant drug (or drugs) and placebo and assessed patients under double-blind conditions
  7. Reported the total number of patients assigned to placebo and medication group(s) as well as the number who had "responded" to treatment, as determined by a reduction of at least 50% in their score on the Hamilton Rating Scale for Depression (HRSD) and/or a Clinical Global Impression (CGI) rating of markedly or moderately improved (CGI score of 1 or 2)
The results (with regression curves) are depicted in the scatter-plot shown below.

The response to placebo in these studies varied from 10% to more than 50%. Over time, placebo response increased by an average of 7 percentage points per decade, going from 21% in 1981 to 35% in 2000.
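That per-decade figure follows directly from the two endpoint rates quoted from the Walsh paper; the arithmetic (a simple linear interpolation, nothing more) looks like this:

```python
# Back-of-envelope check of the Walsh et al. trend,
# using only the two endpoint rates quoted in the text.
rate_1981, rate_2000 = 0.21, 0.35

# Linear rise per decade over the 19-year span
per_decade = (rate_2000 - rate_1981) / (2000 - 1981) * 10
print(f"placebo response rose about {per_decade:.1%} per decade")
```

which works out to roughly 7 percentage points per decade, matching the paper's headline number.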

Interestingly, it wasn't just placebo response that went up in this time frame. Drug response went up too, at nearly the same rate as placebo response.

It should be noted that in 38 of the 75 studies included in the above meta-analysis, patients were allowed to take "concomitant medications" (mostly benzodiazepines) for insomnia and anxiety throughout the studies. This is, unfortunately, a common practice in antidepressant trials and tends to confound results, since sleeping better (by itself) can swing a person's HRSD score by 6 points. (The average HRSD score of patients in the 75 studies was 22.5.)

It's also worth noting that 74.7% of trials in the above meta-analysis allowed for a one- to two-week placebo run-in period, to eliminate "placebo responders." (In this period, subjects are given placebo only; anyone who improves on placebo during the run-in phase is then removed from the study group.) It's a way to skew results in favor of the drug, and reduce placebo scores. But it obviously doesn't work very well.

The Walsh meta-analysis is 11 years old, but more recent studies have found the same results, except that drug response doesn't go up as fast as placebo response. For example, a 2010 report by Khan et al. found that the average difference between drug and placebo in published antidepressant trials has gone from an average of 6 points on the HRSD in 1982 to just 3 points in 2008. In the U.K., antidepressants are considered not to meet clinical usefulness standards (as set by the National Institute for Health and Clinical Excellence) when there is less than a 3-point HRSD difference between drug and placebo. (In the well-known 2008 meta-analysis by Kirsch et al., it was found that SSRIs improved the HRSD score of patients by only 1.8 points more than placebo.)

Rising placebo response rates are seen as a crisis in the pharma industry, where psych meds are runaway best-sellers, because they impede approval of new drugs. Huge amounts of money are on the line.

In a January 2013 paper in The American Journal of Psychiatry, Columbia University's Bret R. Rutherford and Steven P. Roose try to find explanations for the mysterious rise in placebo response rates. The causal factors they came up with are shown below.

| Increases placebo response | Decreases placebo response | Strength of evidence |
| --- | --- | --- |
| More study sites | Fewer study sites | Strong |
| Poor rater blinding | Good rater blinding | Strong |
| Multiple active treatment arms | Single active treatment arm | Strong |
| Lower probability of receiving placebo | Higher probability of receiving placebo | Strong |
| Single baseline rating | Multiple baseline ratings | Medium |
| Briefer duration of illness in current episode | Longer duration of illness in current episode | Medium |
| More study visits | Fewer study visits | Medium |
| Recruited volunteers | Self-referred patients | Weak |
| Optimistic/enthusiastic clinicians | Pessimistic/neutral clinicians | Weak |

from Rutherford & Roose, Am J Psychiatry. 2013 Jan 15

Of all the factors discussed by Rutherford and Roose, the one that has the most potential for explaining the rise in placebo response is the way volunteers are recruited into studies. In the 1960s and 1970s, study participants were mostly unpaid inpatients. Today they're paid outpatients, recruited via advertising. In days past, studies were conducted mostly by universities or research hospitals. Today it's almost all outsourced to CROs (Contract Research Organizations), who attract clientele via web, radio, magazine, and newspaper advertising. More often than not, CRO study volunteers are people without medical insurance looking to make extra money while getting free medical care to boot. Most are only too happy to tell doctors whatever they want to hear.

It's well known that most drug trials have a hard time getting enough sign-ups (this is mentioned in Ben Goldacre's Bad Pharma as well as Irving Kirsch's The Emperor's New Drugs). This creates an incentive for initial-assessment interviewers to score marginal volunteers (e.g., those on the borderline of meeting minimum HRSD scores for depression) liberally, to get them into a study. That can be enough to degrade signal disastrously in studies that look for a difference between an antidepressant and a placebo. Kirsch and others have shown that drug/placebo differences are relatively faint and hard to detect in less-severely-depressed patients, across a large number of different antidepressant types, new and old.

Which brings up a rather embarrassing point. If antidepressants were anywhere near as effective as drug companies would have us think, no one should have to squint very hard to distinguish their effects from a placebo. The fact that the drug industry has a "placebo crisis" at all is telling. It says the meds we're talking about aren't powerful at all. They're nothing like, say, insulin for diabetes.

Whatever the underlying cause(s) of the Placebo Crisis, drug companies consider it very real and very troubling. They're willing (as usual) to do whatever it takes to make the problem go away. And that includes some interesting new kinds of salami-slicing in the area of experimental protocol design. In tomorrow's blog: a look at efforts to patent dodgy protocol designs.


Friday, March 22, 2013

The "Good Data" Problem in Science

In science today, there's a huge problem: Way too many people are publishing way too many results that are way too positive.

This problem isn't imaginary or speculative. It's rampant and well documented. Literature surveys have repeatedly uncovered a statistically unlikely preponderance of "statistically significant" positive findings in medicine [see Reference 1 below], biology [2], ecology [3], psychology [4], economics [5], and sociology [6].

Positive findings (findings that support a hypothesis) are more likely to be submitted for publication, and more likely to be accepted for publication [7][8]. They're also published more quickly than negative ones [9]. In addition, only 63% of results presented initially in conference abstracts are eventually (within 9 years) published in full, and those that are published tend to be positive [10]. There's some evidence, too, that higher-prestige journals (or at least ones that get cited more often) publish research reporting greater "effect sizes" [11].

Why is this a problem? When only positive results are reported, knowledge is being hidden. A false picture of reality is presented (e.g., a false picture of the effectiveness of particular drugs or treatment options). Lack of informed decision-making in health care is a serious issue, since lives are at stake. But even when lives are not at stake, theories go awry when a hypothesis is verified in one experiment yet refuted in another experiment that goes unpublished because "nothing was found."

Another reason under-reporting of negative findings is a pressing problem is that meta-analyses and systematic literature reviews, which are an increasingly common and important tool for understanding the Big Problems in medicine, psychology, sociology, etc., suffer from a classic GIGO conundrum: The meta-analyses are only as good as the available input. If the input is fundamentally flawed in some way, odds are good that the meta-analysis will also be flawed. 

In some cases, scientists (or drug companies) simply fail to submit their own negative findings for publication. But research journals are also to blame for not accepting papers that report negative results. Cornell University's Daryl Bem famously raised eyebrows in 2011 when he published a paper in The Journal of Personality and Social Psychology showing that precognition exists and can be measured in the laboratory. Of course, in reality, precognition (at least of the type tested by Bem) doesn't exist, and the various teams of researchers who tried to replicate Bem's results were unable to do so. When one of those teams submitted a paper to The Journal of Personality and Social Psychology, it was flatly rejected. "We don't publish replication studies," the team was told. The researchers in question then submitted their work to Science Brevia, where it also got turned down. Then they tried Psychological Science. Again, rejection. Eventually, a different team (Galak et al.) succeeded in getting its Bem-replication results (which were negative) published in The Journal of Personality and Social Psychology, but only after the journal came under huge criticism from the research community. (Find more on this fracas here and here and here.)

Aside from the dual problems of researchers not submitting negative findings for publication and journals routinely rejecting such submissions, there's a third aspect to the problem, which can be called outcome reporting bias: putting the best possible spin on things. This takes various forms: changing the chosen outcome measure after all the data are in (to make the data look better via a different criterion of success; one of many criticisms of the $35 million STAR*D study of depression treatments), "cherry-picking" trials or data points (which should probably be called pea-picking, in honor of Gregor Mendel, who pioneered the technique), and the more insidious phenomenon of HARKing, Hypothesizing After the Results are Known [12], all of which often occur with selective citation of concordant studies [13].

No one solution will suffice to fix this mess. What we do know is that the journals, the professional associations, government agencies like the FDA, and scientists themselves have tried in various ways (and failed in various ways) to address the issue. And so, it's still an issue.

A deterioration of ethical standards in science is the last thing any of us needs right now. Personally, I don't want to see the government step in with laws and penalties. Instead, I'd rather see professional bodies work together on:

1. A set of governing ethical concepts (an explicit, detailed Statement of Ethical Principles) that everyone in science can (voluntarily) pledge to abide by. Journals, drug companies, and researchers need to be able to point to a set of explicit guidelines and to go on record as supporting those specific guidelines.

2. Journals should make public the procedures by which disputes involving the publishability of results, or the adequacy of published results, can be aired and resolved.

In the long run, I think Open Access online journals will completely redefine academic publishing, and somewhere along the way, such journals will evolve a system of transparency that will be the death of dishonest or half-honest research. The old-fashioned Elsevier model will simply fade away, and with it, most of its problems.

Over the short run, though, we have a legitimate emergency in the area of pharma research. Drug companies have compromised public safety with their well-documented, time-honored, ongoing practice of withholding unfavorable results. The ultimate answer is something like what's described at It's that, or class-action hell forever.


  • 1.Kyzas PA, Denaxa-Kyza D, Ioannidis JPA (2007) Almost all articles on cancer prognostic markers report statistically significant results. European Journal of Cancer 43: 2559–2579. [here]
  • 2.Csada RD, James PC, Espie RHM (1996) The “file drawer problem” of non-significant results: Does it apply to biological research? Oikos 76: 591–593. [here]
  • 3.Jennions MD, Moller AP (2002) Publication bias in ecology and evolution: An empirical assessment using the ‘trim and fill’ method. Biological Reviews 77: 211–222. [here]
  • 4.Sterling TD, Rosenbaum WL, Weinkam JJ (1995) Publication decisions revisited - The effect of the outcome of statistical tests on the decision to publish and vice-versa. American Statistician 49: 108–112. [here]
  • 5.Mookerjee R (2006) A meta-analysis of the export growth hypothesis. Economics Letters 91: 395–401. [here]
  • 6.Gerber AS, Malhotra N (2008) Publication bias in empirical sociological research - Do arbitrary significance levels distort published results? Sociological Methods & Research 37: 3–30. [here]
  • 7. Song F, Eastwood AJ, Gilbody S, Duley L, Sutton AJ (2000) Publication and related biases. Health Technology Assessment 4 [here]
  • 8.Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan A-W, et al. (2008) Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE 3: e3081. [here]
  • 9.Hopewell S, Clarke M, Stewart L, Tierney J (2007) Time to publication for results of clinical trials (Review). Cochrane Database of Systematic Reviews. [here]
  • 10.Scherer RW, Langenberg P, von Elm E (2007) Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews. [here]
  • 11.Murtaugh PA (2002) Journal quality, effect size, and publication bias in meta-analysis. Ecology 83: 1162–1166. [here]
  • 12.Kerr, Norbert L. (1998) HARKing: Hypothesizing After the Results are Known, Personality and Social Psychology Review, 2(3):196-217. [here]
  • 13.Etter JF, Stapleton J (2009) Citations to trials of nicotine replacement therapy were biased toward positive results and high-impact-factor journals. Journal of Clinical Epidemiology 62: 831–837. [here]
Wednesday, March 20, 2013

    How Blind is Double-Blind?

    The double-blind randomized control trial (RCT) has been the gold standard of clinical research for the last fifty years. But the double-blind RCT might just as well be called an aluminum standard or a lead standard if blinds are regularly broken. Which they are.

    Relatively few studies ask participants, at the end (or in the middle) of the study, whether they were able to guess which treatment arm they were in (placebo versus active treatment). In studies that do, it's surprising how many patients (and clinicians, the other half of the double blind) are, in fact, easily able to guess which group they were in.

    In one double-blind study by Rabkin et al. (Psychiatry Research, Vol. 19, Issue 1, September 1986, pp. 75–86), 137 depression sufferers were divided into groups and given placebo, imipramine (a tricyclic antidepressant), or phenelzine (a monoamine oxidase inhibitor) for six weeks. At the end of the study, patients and doctors were asked to guess which groups they were in. Some 78% of patients and 87% of doctors correctly guessed whether they were dealing with a placebo or a medicine. The authors had this to say:
    Clinical outcome, treatment condition, and their interaction each contributed to guessing accuracy, while medication experience and side effects assessed only in week 6 did not. Accuracy was high, however, even when cases were stratified for clinical outcome, indicating that other cues were available to the patients and doctors. These may include patterns and timing of side effects and clinical response not detectable in this end-point analysis. 
    Another double-blind study, reported by Margraf et al. in the Journal of Consulting and Clinical Psychology (1991, Vol 59, No. 1, pp. 184-187), involved 59 panic-disorder patients who got either placebo, imipramine, or alprazolam (Xanax). Halfway into the study (four weeks in), patients and physicians were asked to guess what they were taking. Among patients, 95% of those taking an active drug guessed they were taking an active drug and 44% of placebo patients guessed they were taking placebo. That's not so unusual, considering that the drugs involved (Xanax, especially) are strong, and strongly therapeutic for the specific disorder being studied (panic disorder). What was interesting was that physicians (who were not subject to the effects of the drugs) also guessed correctly at a high rate (89% for imipramine, 100% for alprazolam, 72% for placebo; effectively rendering the study a single-blind rather than double-blind trial). What's more, imipramine patients were able to discriminate between imipramine and alprazolam at a higher rate than the Xanax users. Among subjects who got imipramine, 71% guessed imipramine. Only 50% of alprazolam patients guessed correctly.

    It could be argued that guessing whether you're taking an anxiolytic or not isn't really that hard. Drugs like caffeine and nicotine, on the other hand, are more subtle in their effects and have been shown to have very high rates of placebo responsiveness in studies of their effects. (People's hearts race, they get jittery, etc. when you give them decaf and tell them it's regular coffee.) So it would be more important to look at those kinds of studies than other kinds. Arguably.

    Mooney et al., in a 2004 paper in Addictive Behaviors, reported the results of a meta-analysis of double-blind nicotine-replacement-therapy studies. In 12 of 17 studies analyzed, participants were able to guess treatment assignments at rates significantly above chance. This was even true of patch therapies, where nicotine is released very slowly into the bloodstream over a long period of time.
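Just how far above chance these guess rates are is easy to quantify with an exact binomial test. The sketch below uses the overall figures quoted from the Rabkin study (78% of 137 patients guessing correctly); the 50% chance baseline is an assumption on my part, since with three unequal arms and a binary placebo-versus-medicine guess, true chance performance isn't exactly 50%:

```python
from math import comb

# One-sided exact binomial test: probability of guessing at least this
# well by pure chance, assuming a 50% chance baseline (an assumption --
# the true baseline depends on the arm sizes).
n = 137                  # patients in the Rabkin et al. study
k = round(0.78 * n)      # ~107 correct placebo-vs-medicine guesses

# P(X >= k) for X ~ Binomial(n, 0.5)
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"{k}/{n} correct; one-sided p = {p_value:.2e}")
```

The resulting p-value is vanishingly small: whatever the exact chance baseline, guessing this accurate is not luck.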

    Blind breakage is potentially damaging to the credibility of any study, but it's especially damaging for studies in which placebo response is intrinsically high and drug efficacy is slight, because even a slight boost in perceived efficacy (from blind penetration) can hugely affect results. It's well known that patients who expect positive results from a treatment tend to experience positive outcomes more often (whether from placebo or drug) than patients who expect negative results. This effect can make a low-efficacy drug appear more efficacious than it actually is once the blind is broken. Irving Kirsch and others have called this the "enhanced placebo effect." Kirsch invokes it to explain the effectiveness of antidepressants, which by his estimate owe 70% to 80% of their efficacy to ordinary placebo effect. The remaining 20% to 30%, Kirsch says, comes not from drug action but from enhanced placebo effect: patients in antidepressant studies, once they begin to notice side effects like sexual dysfunction or gastrointestinal trouble, know that they're getting an active drug, and they tend to respond better because of that knowledge.

    Greenberg et al. did a meta-analysis of antidepressant studies (N = 22) that were subject to stricter blinding requirements than the typical double-blind RCT. (These were rigorously controlled 3-arm studies in which there was a placebo control arm and two active-treatment arms: one including the new drug under study and one including an older "reference" medication.) The authors came away with three conclusions (these are direct quotes):

    1. Effect sizes were quite modest and approximately one half to one quarter the size of those previously reported under more transparent conditions.

    2. Effect sizes that were based on clinician outcome ratings were significantly larger than those that were based on patient ratings. 

    3. Patient ratings revealed no advantage for antidepressants beyond the placebo effect. 

    What can be done about the blind-breakage problem? Some have advocated using a study design that has three or even four arms (placebo, active placebo, reference drug, new drug), or even five arms (if you include a wait-list group that gets absolutely nothing, but whose progress is nonetheless monitored). Some have actually proposed backing out correct-guessers' data entirely from studies after they're conducted. Others have suggested giving the active drug surreptitiously so the patient doesn't even know he or she is getting anything at all. (But this brings up ethical issues with informed consent, potentially.) There's also a concept of triple-blindness.

    Drug companies have adopted an interesting approach, which is to do a pre-study to identify "placebo responders" -- then eliminate them from the actual study. 

    There are also those (e.g., Irving Kirsch) who have suggested a two-by-two matrix design, consisting of placebo and control groups which are then split into two arms, one totally blind, the other deliberately misinformed (placebo-receivers told that they've gotten active drug and active-drug-receivers told that they've gotten placebo).

     In the excellent paper by Even et al. (The British Journal of Psychiatry, 2000, 177: 47-51), the authors present a seven-point Blindness Assessment and Protection Checklist (BAPC) and recommend that researchers provide BAPC analysis as part of their published papers.

    My opinion? If degree-of-blindness is measurable (which it is), then researchers should at least measure it and disclose it as part of any study that's purported to be "blinded." Otherwise, all of us (researchers, clinicians, academics, students, and lay public) are truly operating in the blind.

    Tuesday, March 19, 2013

    The 10 Most-Prescribed Drugs in the U.S.

    And the winner is . . .
    Abilify may be the best-selling drug in the U.S. by dollar volume, but in terms of sheer numbers of prescriptions, it doesn't even come close to the prescription-drug leader: hydrocodone compounded with paracetamol. (You might know it as Vicodin, although it goes by many other names.) It's a Schedule III narcotic, basically a semi-synthetic opioid for pain management, and it's been the best-selling drug in the U.S. for at least ten years.

    The rest of the Top Ten in terms of number of prescriptions filled, according to WebMD (2010 data):

    | Drug name and its uses | Prescriptions filled |
    | --- | --- |
    | Hydrocodone/paracetamol, pain relief | 131.2 million |
    | Generic Zocor (simvastatin), cholesterol-lowering statin | 94.1 million |
    | Lisinopril (brand names include Prinivil and Zestril), blood-pressure control | 87.4 million |
    | Generic Synthroid (levothyroxine sodium), synthetic thyroid hormone | 70.5 million |
    | Generic Norvasc (amlodipine besylate), angina/blood-pressure control | 57.2 million |
    | Generic Prilosec (omeprazole), acid reflux (proton-pump inhibitor) | 53.4 million |
    | Azithromycin (brand names include Z-Pak and Zithromax), antibiotic | 52.6 million |
    | Amoxicillin (various brand names), antibiotic | 52.3 million |
    | Generic Glucophage (metformin), diabetes | 48.3 million |
    | Hydrochlorothiazide (various brand names), diuretic, blood-pressure control | 47.8 million |

    Hydrocodone's status as the top prescription drug in the U.S. will come as no surprise to anyone who's been inside an American drug rehab center. Hydrocodone and its virtually identical twin, oxycodone, are widely used recreationally (and for committing suicide). Where I live, in Florida, there are "Pain Clinics" on all the major highways. These "clinics" accept no insurance; cash only. Most are walk-in facilities (no appointment required). You go in, pay your $250 to $300, see a doctor for a few minutes, pretend you're in pain, tell him or her which recreational drug you're interested in (Xanax, Vicodin, Roxicet, etc.), and you leave with a prescription in your pocket a few minutes later. If you want to see how this works, with your own eyes, right now, just go watch the Peabody-Award-winning documentary The Oxycontin Express.

    As for the other drugs in the chart: Most of the names should come as no surprise, given that heart disease is still (today, as a hundred years ago, before these medications existed) the No. 1 cause of death in the United States. Diabetes is the No. 7 cause of death in America (and is killing about 3% more people every year), hence metformin's strong showing in the chart is not unexpected. Likewise, you'd expect an antibiotic or two to be in the best-seller list.

    Monday, March 18, 2013

    Life Expectancy for U.S. Women Heading Down?

    In a previous post I predicted that U.S. life expectancies will peak soon and begin to decline by 2020. For U.S. women, this process has already begun. A report this month in Health Affairs offered this chilling announcement:
    We examined trends in male and female mortality rates from 1992–96 to 2002–06 in 3,140 US counties. We found that female mortality rates increased in 42.8 percent of counties, while male mortality rates increased in only 3.4 percent. 
    This is not the first study of its kind to report such a result. The New York Times devoted a story to the decline in female life expectancies last September (quoting from an August 2012 Health Affairs piece), and in June 2011 a far more extensive report appeared in Population Health Metrics, with similar findings. Even before that, a 2008 PLoS report by researchers from Harvard, U.C. San Francisco, and the University of Washington found that life expectancy "began to level off or even decline in the 1980s for 4 percent of men and 19 percent of women." Bottom line: this is a process that started many years ago, and it has been verified repeatedly by different teams of researchers.

    Counties in which female life expectancy is heading down are shown in red.

    Some of the decline in life expectancy has been linked to educational level. In 1990, the difference in life expectancy between the most educated white females and the least educated was about 2 years. Now it's 10.4 years. No one has explained why lack of education should be deadlier today than in 1990. It's true that less-educated women smoke more tobacco than better-educated women, but that was true in 1990 as well. And anyway, in order to explain a life-expectancy delta of 10.4 years, one hundred percent of less-educated women would have to smoke, and one hundred percent of educated women would have to be lifetime non-smokers. Which is pretty far from the case.
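    The arithmetic behind that claim is worth making explicit. Here is a back-of-envelope sketch in Python, assuming the commonly cited figure that lifetime smoking costs a smoker roughly ten years of life expectancy (that figure is my assumption, not a number from the studies cited above):

    ```python
    # Sanity check: can differential smoking explain a 10.4-year
    # life-expectancy gap between education groups?
    # ASSUMPTION (mine, not from the post): a lifetime smoker loses
    # roughly 10 years of life expectancy.

    YEARS_LOST_PER_SMOKER = 10.0

    def smoking_attributable_gap(prev_low_ed, prev_high_ed,
                                 years_lost=YEARS_LOST_PER_SMOKER):
        """Life-expectancy gap (in years) that the difference in smoking
        prevalence between two groups could account for on its own."""
        return (prev_low_ed - prev_high_ed) * years_lost

    # Even an extreme prevalence gap (75% of one group smoking vs. 25%
    # of the other) accounts for under half the observed difference:
    print(smoking_attributable_gap(0.75, 0.25))  # 5.0

    # Accounting for all 10.4 years would require a prevalence gap of
    # more than 100 percentage points, which is impossible:
    required_gap = 10.4 / YEARS_LOST_PER_SMOKER
    print(required_gap > 1.0)  # True
    ```

    In other words, under this rough model, even the most lopsided smoking prevalence imaginable (100% vs. 0%) would fall just short of explaining the observed 10.4-year gap.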

    Some experts have tried to pin the blame on rising obesity, but this is likely a red herring as well. For example, obesity rates for women barely changed from 1999 to 2010. It's interesting to note, too, that obesity is more prevalent among Hispanic women than non-Hispanic white women (by a solid margin) and yet life expectancy for Hispanic U.S. females is 83.7 years (CDC data), far higher than the national average (for U.S. women) of 81.1 years.

    What's changed for women in the last 30 years? In a macro sense, the biggest change is that they've entered the workforce in huge numbers. Female participation in the labor force went from under 40% in 1960 to just over 60% in 1997 (where it's stayed ever since). It stands to reason that if you acquire the lifestyle habits of men (such as working outside the home every day), you'll perhaps be exposed to the same stressors that men are exposed to and acquire some of the same mortality risks. Especially if you're working harder, for less money.

    Also, a variety of sources say that women suffer depression at twice the rate of men. Why is this important? Because mental illness is associated with higher mortality.

    Finally, many people consider access to health care a women's issue. For example, women earn less than men yet pay more for health care. If women have poorer access to health care than men, this could partly explain the deterioration in female life expectancy.

    There are no doubt other factors to consider. The greater question is whether the number of counties in which female life expectancy is on the decline will continue to grow until it includes nearly every county in the U.S., or whether the current trend represents a demographic split of some kind (between the well-educated and the less-educated, or between high earners and low earners). It also remains to be seen whether male life expectancies will soon start to go down across the country as well. I suspect we'll see some interesting demographic trends soon, showing white Americans to be doing particularly poorly compared to non-whites. (Blacks and Hispanics are still making strong life-expectancy gains.) Time will tell.