Thursday, February 28, 2013

Gmail Ads: Harmful to Mental Health

Google's use of "interest-based ads" alongside Gmail's windows may seem annoying (and it is), but for a certain group of people it's more than annoying. It's actually harmful. 

Living with someone who has paranoid schizophrenia has taught me a lot about the condition. One thing I've learned is that schizophrenia's symptoms are complex and varied, and antipsychotic drugs do not ameliorate all symptoms equally. Hallucinations tend to respond better to the drugs than paranoia. That's Fact Number One.

This "interest based" ad has nothing whatsoever to do with my interests.

Fact Number Two is that paranoid schizophrenia is the most common type of schizophrenia. 


Fact Number Three is that even in paranoid schizophrenia patients who have made a good recovery, episodes of paranoia can still sometimes be triggered by certain external stimuli. This is where Gmail comes into play.

Every Gmail user, by now, has experienced the discomfiting deja vu that arises from seeing personalized ads next to an open e-mail. Two days ago, you may have written an e-mail to a friend in which you mentioned water pistols. Today, you write an e-mail to someone else and notice (along the right edge of the window) an ad with a blue headline: "Best prices on Glock semi-automatics." This is exactly the kind of thing that causes a person with paranoid schizophrenia to go into an ever-spiraling panic. Is Gmail reading my thoughts? Do Google engineers suspect me of murder? Am I being spied on by Homeland Security agents posing as Google employees? Did my friend that I wrote to two days ago betray me?

Am I being urged to commit suicide with a handgun?


By Google's own estimate, more than 425 million people worldwide use Gmail. It's likely that at least 3 million of them have schizophrenia. Many of those people have serious ongoing paranoia issues, even if they're responding well to medication. The last thing they need is to see paranoia-inducing sidebar links in their e-mail client every day.
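
(Back-of-envelope arithmetic, for the skeptical: the worldwide prevalence of schizophrenia is usually put at roughly 0.7%, and 425,000,000 × 0.007 ≈ 3,000,000.)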


By the way, the screenshot above is one I took this morning while viewing an e-mail to me that had nothing whatsoever to do with firearms. For whatever reason, Gmail decided to show me an "interest based ad" on firearms training. Why? I don't know. I don't own a gun, don't plan on owning a gun, don't have any interest whatsoever in firearms training. No ad could be less "interest based" than this one.

The problem here is that you can't turn sidebar ads off. "Web Clip" ads at the top of your e-mail window can be turned off (see this article to learn how), but the "interest based" ads cannot be turned off. At https://www.google.com/settings/u/0/ads/preferences/?hl=en you'll find: "Google tries to show you the most relevant ads, whether or not you're opted in to seeing personalized ads."

If Google wants to do the right thing, it needs to provide paranoia sufferers a way to turn off all personalized ads. The way things stand now, Google's spooky interest-based ads are merely an annoyance to most of us. But for some, they're a threat to mental health.

Wednesday, February 27, 2013

Why Life Expectancies Will Soon Go Down

Obesity rates began trending sharply higher in 1980. Coincidentally, 1980 was the year of the first Dietary Guidelines for Americans by the Department of Health and Human Services.
Recently, I wrote about the fact that if we could eliminate all heart disease and all cancer overnight, human life expectancy would increase only slightly: just 6.7 years if heart disease is eliminated, 3.3 years if cancer is eradicated. (Those are CDC's numbers.) It seems clear that to extend life expectancies much beyond, say, 100 years will require more than merely eliminating the most common causes of death. It will require re-engineering the human body to be a good deal more robust in terms of its self-repair capabilities.

In my previous post, I hinted at the fact that today, it takes a much larger change in mortality rates to move the life-expectancy needle than it used to. The non-straightforward relationship of life-expectancy delta to mortality-rate delta is modeled in terms of something called Keyfitz Entropy, and the strange "diminishing returns" effect has been called Taeuber's Paradox (after public health researcher Conrad Taeuber). If you'd like more background on this, I suggest you start with the 2001 Science article by Olshansky et al. ("Prospects for Human Longevity") which shows that, because of Keyfitz/Taeuber effects, it's unlikely the U.S., at its current rate of mortality progress, will see 100-year life expectancies until the year 2485 (for women) or 2577 (for men).
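
For the mathematically inclined: Keyfitz's H is essentially the entropy of the survival curve, H = -∫ l(x) ln l(x) dx / ∫ l(x) dx, where l(x) is the fraction of a birth cohort still alive at age x. The handy rule of thumb that falls out of the math is that a uniform proportional cut in mortality at every age raises life expectancy by only H times that proportion; and for modern low-mortality countries, H is small (on the order of 0.15, if memory serves). Cut all death rates by 10% and life expectancy rises by something like 1.5%, roughly a year, not 10%. Hence the diminishing returns.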

Over the short term, I remain quite pessimistic. Dreamers like Ray Kurzweil are predicting immortality by 2040. My prediction is quite different. In the United States, we should expect to see life expectancies peak right about now and begin trending downward by 2020, driven mostly by the sudden jump in obesity that began in 1980 (and the doubling of diabetes in the last 15 years).

Some 35.7% of adult Americans are obese and another 33% are overweight but not obese. The numbers for children are starting to approach those for adults. And we're continuing to trend sharply in the wrong direction. This will show up as an unwelcome drop in life expectancy when all the follow-on effects of epidemic obesity and diabetes begin to be realized.

It's important to note that overweight is a predictor of
  • Type 2 diabetes
  • hypertension
  • stroke
  • coronary artery disease
  • pulmonary embolism
  • asthma
  • chronic back ailments
  • osteoarthritis (and related hip and knee replacement)
  • gallbladder disease
  • obstructive sleep apnea
  • colorectal cancer
  • kidney cancer
  • pancreatic cancer
  • endometrial cancer
  • ovarian cancer
  • post-menopausal breast cancer
Those are just the possible somatic outcomes. Psychiatric comorbidity is also extremely common with obesity (and with chronic illness of any kind).

Smoking has been on the decline for decades but is starting to level off at around 15% of the U.S. population. Heart disease has decreased in parallel with the decades-long drop in tobacco consumption (although lung cancer rates, for some unknown reason, have merely leveled off and not gone down).

There are some important lessons to be learned here.
  • Existing public policy efforts in combating obesity have been, and continue to be, an abject failure. Weight gain in the U.S. is out of control, across most demographics, including young children. A sharp increase in rate of weight gain started around 1980. The change in slope after 1980 doesn't correspond to a sharp increase in TV-watching, online gaming, or many of the other (entirely speculative) causal factors usually cited for obesity. There haven't been any sharp increases in causal factors other than calorie intake.
  • Obesity is a proven risk factor for at least 16 serious illness types (not counting the adverse outcomes associated with diabetes). Because of the lead time required for these diseases to develop, we have not yet begun to see the full impact of the 1980 increase in the rate of weight gain.
  • Much of the progress made against heart disease over the last 40 years has been driven by reduced cigaret smoking. But smoking rates are leveling off now (at around 15%) and aren't likely to go much lower, for a variety of reasons. Most of the low-hanging fruit, in heart-disease and cancer prevention, has already been picked. We shouldn't expect dramatic lowering of the mortality rates for heart disease or cancer (the No. 1 and No. 2 causes of death) going forward.
  • When obesity knock-on effects (still in the pipeline) show up as increased mortality from diabetes, stroke, heart attack, cancer, etc., life expectancy in the U.S. will actually start going down.
It should be obvious that the single most important thing we can do right now to increase overall life expectancy in the U.S. is to get people to lose weight and teach their children to eat right. For a lot of reasons, I'm pessimistic on both of those. Corporations have no incentive to make children want to eat right; parents aren't doing the job; current public-health policies aren't doing the job; and it's politically unpopular in the U.S. right now to have the government step in with meaningful social programs.

In upcoming posts, I want to switch gears and start talking about the molecular biology of aging and the sorts of things we can do now (not 25 years from now) to extend life expectancy by 20%, 30%, or more. Stay tuned for details.

Monday, February 25, 2013

Taeuber's Paradox and the Life Expectancy Brick Wall

A friend and I were talking the other day about mortality, morbidity, and other cheerful topics, and I happened to mention to him Keyfitz's classic 1977 paper on Taeuber's Paradox (for which there is, at the time of this writing, no Wikipedia page, mercifully), which in turn sprang from the somewhat counterintuitive finding that if cancer were eliminated as a cause of death, it would yield an increase in life expectancy of only a little more than 3 years. Maybe not everybody will find this result counterintuitive. It's by no means certain that Conrad Taeuber himself did. But Nathan Keyfitz did, and I did, and my friend Jeff did; and so, for purposes of this discussion, that's a quorum.

Cancer is not one disease, of course. Like "heart disease," it's a multiplicity of unspeakably terrible ailments. Nevertheless we count it as one disease in discussions of mortality in this country, so that we can point at it and say "Cancer is the Number Two cause of death in America," and then presidents can declare war on it, $10 billion a year in taxpayers' money can be set aside for research on it (approximately $500 billion in 2012 dollars spent since Nixon declared war), a $50-billion-a-year commercial industry of toxic therapies (some of which cost $10,000 a month) can be built around it, and meanwhile delusional goofballs like Ray Kurzweil can talk of achievable immortality (with arguments that don't even come close to passing the straight-face test) when there's no cancer cure in sight. (I don't consider transplanting my brain into silicon to be the same as achieving immortality, incidentally.)

It might do the Kurzweils of the world some good to spend a little time pondering the fact that roughly $20,000 in anti-cancer research money has been spent for every single person in the U.S. who has died of cancer in the last 40 years; and yet after 40 years, cancer is still the No. 2 cause of death in America; and after it's gone, after it's cured once and for all, this bane of human existence, this No. 2 Cause of Death, we will have extended human life a grand total of (drum roll, please) 3.3 years (loud cymbal-crash).

One reason eliminating such a significant cause of death has such a minuscule impact on life expectancy is that other causes of death rush in to fill the void. If you're 75 years old, suddenly eliminating cancer as a cause of death still leaves you with all the other killer diseases that make 75-year-olds go tits-up. It's more complicated than that, of course. One thing you have to consider is that eliminating a disease of later life has much less effect on life expectancy than eliminating an early-in-life disease. If you can prevent a fatal disease of childhood, the contribution to average life expectancy is much greater than if you can cure a disease that only befalls 90-year-olds. This is why life expectancies rose so sharply in the first years of the 20th century (and why we're not likely to see such a surge repeated any time soon). Starting in the early 1900s, killer diseases of early childhood (and early adulthood) began to abate one by one.

Bottom line: the calculation of Potential Gain in Life Expectancy (PGILE) is far from straightforward, because you need to know the mortality rate for the illness in question at every age of life, and depending on how that curve shapes up, you get a final PGILE number that's bigger or smaller than you might have guessed from the illness's overall ranking among national causes of death.
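
For the programmers in the audience, here's a toy sketch, in JavaScript, of the idea behind a cause-deleted life table. The mortality numbers you'd feed it are invented for illustration; real calculations use single-year-of-age rates from CDC's decennial life tables, and real actuaries handle competing risks more carefully than simple subtraction does.

// q[x] = probability of dying during year of age x, for x = 0, 1, 2, ...
// Life expectancy at birth = total person-years lived per starting person.
function lifeExpectancy(q) {
    var alive = 1.0, years = 0;
    for (var x = 0; x < q.length; x++) {
        years += alive * (1 - q[x] / 2);  // those who die get half a year
        alive *= (1 - q[x]);              // survivors entering age x + 1
    }
    return years;
}

// PGILE: recompute life expectancy with the cause's share of each q removed.
function gainIfEliminated(qAll, qCause) {
    var qDeleted = qAll.map(function (q, x) { return q - qCause[x]; });
    return lifeExpectancy(qDeleted) - lifeExpectancy(qAll);
}

Feed it a cause that kills mainly 80-year-olds and the gain comes out tiny, because the survivors still face everything else on the list; feed it a childhood killer and the gain is large. That's Taeuber's Paradox in fifteen lines.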

Back in 1999 (but unfortunately not since then), the Centers for Disease Control, using 1990 Census data (and other data of the time), published information on the potential gain in life expectancy to be expected if various categories of death were eliminated. The numbers are shown in the table below.

CATEGORY OF DEATH / POTENTIAL GAIN IN LIFE EXPECTANCY (YEARS) IF ELIMINATED
CARDIOVASCULAR: All cardiovascular diseases 6.73
CANCER: Malignant neoplasms, including neoplasms of lymphatic and hematopoietic tissues, AIDS, etc. 3.36
Diseases of the respiratory system 0.97
Accidents and "adverse effects" (health-care-induced deaths) 0.92
Diseases of the digestive system 0.46
Infectious and parasitic diseases 0.45
Firearm deaths 0.4
Certain conditions originating in the perinatal period 0.33
Suicide 0.3
Homicide and "legal intervention" (law-enforcement and penal-system-induced deaths) 0.29
Diabetes mellitus 0.27
Congenital anomalies 0.2
Alcohol-induced deaths 0.17
Drug-induced deaths (medicinal and recreational drug overdoses) 0.1
Sudden infant death syndrome 0.1
Nephritis, nephrotic syndrome, and nephrosis 0.1
Alzheimer’s disease 0.05
Urinary tract infection 0.04
Non-metastatic neoplasms, and "neoplasms of uncertain behavior and unspecified nature" (medical mysteries, basically) 0.04
Parkinson’s disease 0.03
Senile and presenile organic psychotic conditions 0.03
All others 1.96
TOTAL 17.3
Data taken from U.S. Decennial Life Tables for 1989-91, U.S. Dept. of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics, Volume 1, Number 4.

What the table says is (for example) if we could eliminate cardiovascular disease as a cause of death in America, average life expectancy would go up by 6.73 years. Again, it seems a trifle odd that if you eliminate the No. 1 cause of death in America, a disease category that kills roughly one in three people, life expectancy goes up only 8.6%. But (again), this is partly a reflection of the fact that cardiovascular ailments are (for the most part) not child-killers; they're diseases of middle age and old age. And if you don't die of heart disease, there are plenty of diseases of old age that will still kill you.

So in my own morbid way, I thought it might be a fun exercise if, utilizing CDC's data, we were to total up the potential-gain-in-life-expectancy numbers for all causes of death, to see where we stand in terms of life expectancy if we eliminate all causes of death. (Yes yes, I know I know, the numbers can't just be considered strictly additive, but this is a Gedankenexperiment, so cut me some Gedanken Laxheit.) It turns out the total possible gain in life expectancy from prevention of all causes of death is a rather modest 17.3 years, putting a theoretical limit on U.S. life expectancy of 78.2 + 17.3 == 95.5 years.
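
If you'd like to check my addition, here's a quick console snippet; the values are simply the PGILE column from the table above:

var pgile = [6.73, 3.36, 0.97, 0.92, 0.46, 0.45, 0.4, 0.33, 0.3, 0.29,
             0.27, 0.2, 0.17, 0.1, 0.1, 0.1, 0.05, 0.04, 0.04, 0.03, 0.03, 1.96];
var total = pgile.reduce(function (sum, x) { return sum + x; }, 0);
alert(Math.round(total * 100) / 100);  // displays 17.3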

I don't for a moment hold this up as any kind of rigorous result. But I do think there is qualitative support here for the general notion that any quest for immortality that's based on mere elimination of current causes of death is fundamentally misguided. The individual PGILE numbers (as much as their sum) hint strongly at the idea that to extend human life significantly will mean doing far more than merely preventing the preventable causes of death (even if we consider the top 15 causes of death all "preventable" in one way or another, which of course many of them are not).

Exactly what that means, I'll leave as an exercise for the reader -- and will expound on in a separate blog. Assuming, of course, I live that long.


Sunday, February 24, 2013

The Comorbidity Crisis

The Comorbidity Crisis is not exactly a household word (yet), but I'm betting it will catch on. Multiple morbidity (presence of two or more medical conditions in a given patient at a given time) is increasingly common, and it's creating a kind of secondary health-care crisis of its own.

Approximately 75 million people in the U.S. have two or more chronic conditions, defined as "conditions that last a year or more and require ongoing medical attention and/or limit activities of daily living." [source] Some 65% of health care spending is directed at this 24% of the population. That's for the population as a whole. For the elderly, multiple morbidity is even more problematic. About 80% of Medicare spending goes to patients with four or more chronic conditions, with costs increasing exponentially as the number of chronic conditions increases. [source]

Multimorbidity is steadily getting worse over time, not just because the population is aging but because we're all getting sicker (or at least, showing up at the doctor's office with more complaints).
  • A Dutch study found that while the prevalence of chronic diseases doubled between 1985 and 2005, the proportion of patients with four or more chronic diseases increased in this period by approximately 300%.
  • The number of Americans receiving drugs for depression went from under 100,000 in 1955 to 13.3 million in 1996 to 27.0 million in 2005, and we now know that 68% of the mentally ill are comorbid for a physical ailment.
  • From 1995 to 2010, the age-adjusted prevalence of diabetes increased by over 50% in 42 states and by more than 100% in 18 states. The median prevalence rose from 4.5% to 8.2% in the 1995-2010 time period (almost doubling in 15 years). [source]  Most adults with diabetes have at least one comorbid chronic disease, and as many as 40% have three or more. [source]
  • In one study of 1,122 diabetes patients, the average patient used 13 medications to treat or prevent 8 different medical conditions. Typical diabetic comorbidities include obstructive sleep apnea, retinopathy (eye damage), neuropathy (deterioration of small nerves in the extremities), nephropathy (kidney damage), cognitive deterioration (diabetic encephalopathy), and a wide variety of cardiovascular comorbidities.

The top and bottom quintiles of diabetes and obesity co-map.
From http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6145a4.htm

Less obvious is that comorbidity is spreading across various disease types that, in times gone by, wouldn't necessarily have been connected. Who knew, until recently, that gum disease had any connection to cardiovascular disease? Likewise, fifty years ago (when only 13% of the U.S. population was obese) there was no obvious link between being overweight and getting cancer. Today we know that there are statistically significant correlations between overweight (BMI greater than 25) and incidence of endometrial cancer, ovarian cancer, post-menopausal breast cancer, colorectal cancer, kidney cancer, and pancreatic cancer. [source] This is in addition to the well-known links between overweight and hypertension, stroke, coronary artery disease, pulmonary embolism, asthma, gallbladder disease, osteoarthritis, and chronic back pain.

Mental and physical morbidities lead from one to the other in subtle and not-so-subtle ways. Chronic physical ailments with a high "symptom burden" (e.g., chronic pain from arthritis) are often accompanied by depression. Depression, in turn, has been linked to altered immune function (including release of cytokines involved in inflammatory response), which opens the way to physical illness. If you're taking antidepressants, odds are high that you'll gain weight, putting you at greater risk of diabetes and the various cancers mentioned earlier. (One in six patients who take Zyprexa will gain more than 33 pounds in the first two years of use. [source] Moreover, all modern antipsychotics, according to Eli Lilly sales training literature, bring increased risk of diabetes.) It's a world of endless ripple effects.

One untoward outcome of multiple morbidities is polypharmacy, which leads to increased metabolic burden (toxic overload), drug compliance issues (forgetting to take pills), and adverse drug reactions (ADRs). One study found that the risk of ADRs was 2.65-fold higher in patients taking more than four drugs. But many patients take far more than four drugs. "If we apply the relevant CPGs [clinical practice guidelines] to a 79-year-old woman with osteoporosis, osteoarthritis, type 2 diabetes, hypertension and chronic obstructive pulmonary disease, all of moderate severity," one researcher wrote, "the patient should be taking 12 different medications in 19 daily doses at 5 different times of the day" -- not counting any drugs to be taken "as needed."

The Comorbidity Crisis is just getting underway. As DSM-V (due in May) expands the guidelines for mental illness, we're sure to see a continued sharp rise in mental/physical comorbidities; and as the population ages, we'll continue to see more people sick with multiple ailments, taking more drugs for more concurrent illnesses. The burden on the health care system will increase exponentially, and at some point, all of us will be poorer, not just in dollar terms but (very likely) in terms of basic health.

The winners in all this? Big pharma. You may want to buy some drug company stocks now. The really big profits are straight ahead.



RANK  COMPANY                FORTUNE 500 RANK   REVENUES ($ MILLIONS)   PROFITS ($ MILLIONS)
1     Johnson & Johnson      33                 61,897.0                12,266.0
2     Pfizer                 40                 50,009.0                8,635.0
3     Abbott Laboratories    75                 30,764.7                5,745.8
4     Merck                  85                 27,428.3                12,901.3
5     Eli Lilly              112                21,836.0                4,328.8
6     Bristol-Myers Squibb   114                21,634.0                10,612.0
7     Amgen                  159                14,642.0                4,605.0
8     Gilead Sciences        324                7,011.4                 2,635.8
9     Mylan                  412                5,092.8                 232.6
10    Genzyme                458                4,515.5                 422.3
11    Allergan               459                4,503.6                 621.3
12    Biogen Idec            471                4,377.3                 970.1

Saturday, February 23, 2013

The Myth of the Medical Breakthrough

There's a popular myth afoot (maybe you've been spreading it?) that says that the roughly 50% increase in life expectancy that occurred in Western countries over the past 100 years can be traced directly to advances in medicine. The advent of antibiotics and vaccines, in particular, added years, perhaps decades, to people's lives. Didn't they?

Or did vaccines and antibiotics actually come along rather late in the game, after long-underway drops in infectious disease rates had already run their course?

Perhaps the reason we're not all dying from bubonic plague is that we no longer live with rats in our houses. Vaccines certainly had nothing to do with it.

What do you think? Are vaccines and antibiotics keeping us from returning to the Infectious Disease Age? Or did we emerge from the Infectious Disease Age for other reasons?


Friday, February 22, 2013

Eli Lilly and its Zyprexa Go-to-Market B.S.

Eli Lilly Sales Organization backgrounder.

Weight gain and diabetes are among the small-print risks for all major atypical antipsychotic medications now. But not all atypicals are equal. What patients knew intuitively is now established scientifically: Lilly's Zyprexa produces the worst weight gain of all of the major atypicals. You don't have to dig through the medical literature to find out how bad the situation is. Just ask a friend. Or if you don't have any friends on Zyprexa, go online to the health forums, or to YouTube, where you won't have any trouble locating people who've gained 25% or more in body weight from taking Zyprexa.

Just for fun, you might want to hear how these drugs are sold to doctors. The video below is by a guy who sold Zyprexa for a living. Have a listen. The spin job begins with Corporate, funnels down through Sales, reaches your doctor, and ends with (guess who?) you.

Warts and the Power of Suggestion

Not long ago I wrote a post (which got over 10,000 views) in which I raised the controversial hypothesis that perhaps the reason so many smokers are getting lung cancer today (far more cases than existing dose-response models can explain) is that we're all being programmed by the cancer warnings on cigaret boxes telling us we are going to get cancer. In other words, maybe the power of suggestion is aggravating the problem.
Human papilloma virus.

I admit it seems ridiculous that merely planting the suggestion of cancer in someone's mind can make a person get cancer. (Still, read the evidence in my previous post.) But we know that the power of suggestion can, in fact, help rid a body of tumors; at least, tumors of a specific kind. The kind called warts.

Warts are, of course, benign papillomas, caused by a virus. And the strange thing about them is, it's been known for many years that patients can make their own warts go away by power of suggestion. This has been verified so many times in the literature that it needs no further argument. Some of the papers are listed below.

Well-known physician-author Lewis Thomas once wrote an engaging essay on this topic. According to Thomas:

There have been several meticulous studies by good clinical investigators, with proper controls. In one of these, fourteen patients with seemingly intractable generalized warts on both sides of the body were hypnotized, and the suggestion was made that all the warts on one side of the body would begin to go away. Within several weeks the results were indisputably positive; in nine patients, all or nearly all of the warts on the suggested side had vanished, while the control side had just as many as ever. It is interesting that most of the warts vanished precisely as they were instructed, but it is even more fascinating that mistakes were made. Just as you might expect in other affairs requiring a clear understanding of which is right and which the left side, one of the subjects got mixed up and destroyed the warts on the wrong side.
Experiments of this sort have not been tried on malignant tumors, for obvious ethical reasons. (You can't very well leave a control group of cancer patients untreated when treatment options exist.) Nevertheless, it makes one wonder. How much do we really know about the mind's control over vascular and immunologic processes?  Maybe there's more to this power-of-suggestion stuff than we might imagine?


References

Ewin, D.M. Hypnotherapy for warts (verruca vulgaris): 41 consecutive cases with 33 cures, Am J Clin Hypn. 1992 Jul;35(1):1-10. http://www.ncbi.nlm.nih.gov/pubmed/1442635

Sinclair-Gieben, A.H.C., Chalmers, D., Evaluation of Treatment of Warts by Hypnosis, Lancet 2: Oct 3 1959; 480-482. Cited and excerpted in media.noetic.org/uploads/files/chapter11.pdf

Smith, Arthur Preston, The Power of Thought to Heal: An Ontology of Personal Faith, Ph.D. Dissertation (1998) Claremont Graduate University, CA http://freehealing.org/dissert.html

Spanos, Nicholas P., Stenstrom, Robert J., Johnston, Joseph C., Hypnosis, Placebo, and Suggestion in the Treatment of Warts, Psychosomatic Medicine 50:245-260 (1988) http://bscw.rediris.es/pub/bscw.cgi/d4465026/Spanos-Hypnosis_placebo_suggestion_warts.pdf

Tenzel, J.H., Taylor, R.L., An evaluation of hypnosis and suggestion as treatment for warts. Psychosomatics 1969 Jul-Aug;10(4):252-7. http://www.ncbi.nlm.nih.gov/pubmed/5808618

Vollmer, Hermann, Treatment of Warts by Suggestion, Psychosomatic Medicine March 1, 1946 vol. 8 no. 2 138-142 http://www.psychosomaticmedicine.org/content/8/2/138.abstract

Thursday, February 21, 2013

A Few Thoughts about Semicolons

John Shuttleworth circa 1980.
Did you ever use too much dill in a recipe? Remember how it tasted? Semicolons are like that. Useful in a pinch, distasteful in excess. No one ever sits down to eat a bowl of them.

When I was a rewrite monkey at The Mother Earth News, many geological epochs ago (when men were monkeys), John Shuttleworth, founder of the magazine, chastised me mercilessly for using too many semicolons. How many semicolons was "too many"? Answer: any integer greater than zero.

Shuttleworth associated semicolons with academic writing and stuffy, pompous, pedantic writing in general. If periods were lag bolts and commas were pop rivets, semicolons were (to him) a kind of inferior duct tape, not to be trusted. He was unable to articulate his prejudices on the subject in a convincing fashion. Yet I think I know now what he meant.

I won't go so far as to say (as Shuttleworth did) that semicolons are evil. But I would agree with him that they're overused.

People typically use semicolons as an adhesive to join together two independent clauses. It's a stronger adhesive than a comma, but not as strong as a period. Example:

Deficit spending is an idea that no longer enjoys the kind of support it once did; it is one of many quaint Keynesianisms that have fallen into disfavor.
This is the kind of sentence Shuttleworth hated with the fire of a thousand suns. Why? It's two sentences pretending to be one. What's wrong with that? First, it makes a sentence twice as long, and long sentences suck. Also, semicolons put more work on the reader. A person encountering the semicolon in the above sentence thinks, "Okay, I'm being told to cache the meaning of the 16 words I just read so that I can see how the next statement relates to it." The trouble is, the reader may not want (or be able) to hold the preceding clause in memory, particularly if the followup clause is a long one. You're telling the reader to keep one ball up in the air while grabbing a new one. Why not just hand the reader Ball 1, then hand him Ball 2?
Deficit spending is an idea that no longer enjoys the kind of support it once did. It's one of many quaint Keynesianisms that have fallen into disfavor.
Was there ever any reason to combine these two sentences into one, using a semicolon? No. Not really. So why do it?

I agree with Shuttleworth that using a semicolon to join two independent clauses is a poor idea, seldom justifiable in terms of readability. It's the conventional ("accepted") way to use semicolons, yet it's also the worst possible use-case.

When should you use a semicolon? Here are the use-cases that work for me:

1. Use semicolons in any list in which the list items come with their own internal punctuation. Example: We made stops in London, England; Geneva, Switzerland; and Paris, France.

2. Use a semicolon to put a longer explanatory thought after a super-short statement that begs further explanation. Example: I don't like coffee; I'm jittery enough without it, and it keeps me awake at night. Shorter thought, then longer thought.

3.  Use a semicolon when you need to add a quick/short clarification or observation to the tail end of a much longer thought. For example: Only ten percent of patients had a full recovery after ten weeks of intensive treatment; a poor outcome by any measure. Longer thought followed by much shorter thought.


Also: When I see myself using more than one semicolon in a paragraph, or in a passage of less than, say, 300 words, I force myself to do a little rewriting to eliminate one or more semicolons. As a reader, I'm annoyed when I see semicolons cropping up everywhere in an otherwise-good piece of writing. I have to think there are others of my kind out there.

It's up to you. Use semicolons with abandon if you dare. But I think for most readers, semicolons are like dill weed. They nearly always spoil the recipe if used too freely.

Wednesday, February 20, 2013

The 957-word sentence

I love to rail against the use of too-long sentences and too-long paragraphs. One of the easiest ways to improve any piece of writing is simply to refactor it into a greater number of (shorter) sentences and paragraphs. Short sentences are easier to parse. They're easier on the writer as well as the reader. What's not to like about them?

Long sentences are an invitation to semantic and linguistic disaster. The more verbiage you ask the reader to "push onto the stack" (in programmer parlance), the more likely it is the reader will forget where you're going before you get there. You're asking the reader to do a lot of work when you present him or her with a too-long sentence.

Want proof? I recently came across the ultimate example of a too-long sentence. Coincidentally, it also makes for a too-long paragraph. 

This longest-sentence-ever is from Vol. 4 of the longest novel ever (at 1.2 million words), Marcel Proust's Remembrance of Things Past (À la Recherche du temps perdu), said by some to be one of the top novels of all time. I present the sentence here as an example of how not to write (unless you're a dilettante writing for a 1913 French audience and you don't care if anyone understands you).

Their honour precarious, their liberty provisional, lasting only until the discovery of their crime; their position unstable, like that of the poet who one day was feasted at every table, applauded in every theatre in London, and on the next was driven from every lodging, unable to find a pillow upon which to lay his head, turning the mill like Samson and saying like him: “The two sexes shall die, each in a place apart!”; excluded even, save on the days of general disaster when the majority rally round the victim as the Jews rallied round Dreyfus, from the sympathy–at times from the society–of their fellows, in whom they inspire only disgust at seeing themselves as they are, portrayed in a mirror which, ceasing to flatter them, accentuates every blemish that they have refused to observe in themselves, and makes them understand that what they have been calling their love (a thing to which, playing upon the word, they have by association annexed all that poetry, painting, music, chivalry, asceticism have contrived to add to love) springs not from an ideal of beauty which they have chosen but from an incurable malady; like the Jews again (save some who will associate only with others of their race and have always on their lips ritual words and consecrated pleasantries), shunning one another, seeking out those who are most directly their opposite, who do not desire their company, pardoning their rebuffs, moved to ecstasy by their condescension; but also brought into the company of their own kind by the ostracism that strikes them, the opprobrium under which they have fallen, having finally been invested, by a persecution similar to that of Israel, with the physical and moral characteristics of a race, sometimes beautiful, often hideous, finding (in spite of all the mockery with which he who, more closely blended with, better assimilated to the opposing race, is relatively, in appearance, the least inverted, heaps upon him who has remained more so) a relief in frequenting the society of their kind, and even some corroboration of their own life, so much so that, while steadfastly denying that they are a race (the name of which is the vilest of insults), those who succeed in concealing the fact that they belong to it they readily unmask, with a view less to injuring them, though they have no scruple about that, than to excusing themselves; and, going in search (as a doctor seeks cases of appendicitis) of cases of inversion in history, taking pleasure in recalling that Socrates was one of themselves, as the Israelites claim that Jesus was one of them, without reflecting that there were no abnormals when homosexuality was the norm, no anti-Christians before Christ, that the disgrace alone makes the crime because it has allowed to survive only those who remained obdurate to every warning, to every example, to every punishment, by virtue of an innate disposition so peculiar that it is more repugnant to other men (even though it may be accompanied by exalted moral qualities) than certain other vices which exclude those qualities, such as theft, cruelty, breach of faith, vices better understood and so more readily excused by the generality of men; forming a freemasonry far more extensive, more powerful and less suspected than that of the Lodges, for it rests upon an identity of tastes, needs, habits, dangers, apprenticeship, knowledge, traffic, glossary, and one in which the members themselves, who intend not to know one another, recognise one another immediately by natural or conventional, involuntary or deliberate signs which indicate one of his congeners to the beggar in the street, in the great nobleman whose carriage door he is shutting, to the father in the suitor for his daughter’s hand, to him who has sought healing, absolution, defence, in the doctor, the priest, the barrister to whom he has had recourse; all of them obliged to protect their own secret but having their part in a secret shared with the others, which the rest of humanity does not suspect and which means that to them the most wildly improbable tales of adventure seem true, for in this romantic, anachronistic life the ambassador is a bosom friend of the felon, the prince, with a certain independence of action with which his aristocratic breeding has furnished him, and which the trembling little cit would lack, on leaving the duchess’s party goes off to confer in private with the hooligan; a reprobate part of the human whole, but an important part, suspected where it does not exist, flaunting itself, insolent and unpunished, where its existence is never guessed; numbering its adherents everywhere, among the people, in the army, in the church, in the prison, on the throne; living, in short, at least to a great extent, in a playful and perilous intimacy with the men of the other race, provoking them, playing with them by speaking of its vice as of something alien to it; a game that is rendered easy by the blindness or duplicity of the others, a game that may be kept up for years until the day of the scandal, on which these lion-tamers are devoured; until then, obliged to make a secret of their lives, to turn away their eyes from the things on which they would naturally fasten them, to fasten them upon those from which they would naturally turn away, to change the gender of many of the words in their vocabulary, a social constraint, slight in comparison with the inward constraint which their vice, or what is improperly so called, imposes upon them with regard not so much now to others as to themselves, and in such a way that to themselves it does not appear a vice.

Tuesday, February 19, 2013

Poverty and Obesity in America: How They Map

It should come as no surprise that low income and obesity are linked. They definitely are, and the linkage becomes quite clear when you pictorialize the spatial distribution of low income and obesity on maps.

Per capita income by county (click to enlarge). Dark red is poorest.

The above map shows 2008 median incomes by county within the U.S. The darkest red corresponds to an income range of $19.2K to $34.5K per year. For reference, $22K/yr is considered poverty level for a family of four and corresponds to almost exactly half the nationwide median income for all Americans. So the darkest red is not a perfect indicator of poverty, because some of the people in those areas make $17 an hour, whereas $11 an hour is the upper bound of "poverty" as defined by the Federal Government. The darkest reds simply indicate the lowest-earning counties.

Obesity prevalence by county. Red is fat.
In this map, we see 2008 age-adjusted obesity rate by county. The darkest reds represent areas where obesity is most prevalent. (Exactly why Illinois stands out so starkly, with no areas of dark red, I don't know. It suggests to me some kind of systematic error with that state's statistics. But for our purposes, the map is good enough.) Obesity, in the U.S., is defined as a Body Mass Index of 30 or higher. Bear in mind that the color gradations in this map do not have anything to do with how obese (how far overweight) people in the associated regions are. The colors are merely indications of what percentage of people in each area meet the minimum criterion for obesity.
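
(A quick refresher: BMI is weight in kilograms divided by the square of height in meters, BMI = kg/m². For a 5'10" adult, the BMI-30 obesity threshold works out to roughly 209 pounds.)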

Already we can see at a glance, from the two maps above, that low income correlates quite well with obesity. But just to make the correlation perfectly clear (visually), I've created a third map (below), which correlates the colors of the two maps above in such a way as to highlight the areas of high correlation and "blue out" areas of low correlation.
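
For anyone curious about the mechanics: the recipe is simply to score each county on both maps and keep red only where the two scores are jointly high. Here's a minimal sketch of the idea in JavaScript; the incomeRank and obesityRank lookup tables are hypothetical stand-ins for the real data, not the actual code behind the map:

// Both lookups assumed normalized to the range 0..1 (1 = poorest / most obese).
function correlationColor(countyId) {
    var poverty = incomeRank[countyId];
    var obesity = obesityRank[countyId];
    var match = poverty * obesity;             // high only when BOTH are high
    var red = Math.round(255 * match);
    var blue = Math.round(255 * (1 - match));  // "blue out" low-correlation areas
    return "rgb(" + red + ",0," + blue + ")";
}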

Correlation map. Areas of dark red are where very high obesity rates and lowest income levels coincide.

The correlation between obesity and low income is quite evident. Areas with the very lowest incomes have the very highest rates of obesity.

This correlation doesn't exist in less-developed countries, where as a rule the rich are fatter than the poor, for the straightforward reason that the poor are too poor to buy food. Even in China, overweight correlates positively with income (not negatively, as in the U.S.) across all income levels and across all parts of the country (rural versus urban), according to Rand data.

One income-related obesity correlation that holds true across all countries studied is that obesity tends to be high where income disparity is high. (See graph below.)

Obesity tracks income disparity across countries. USA is worst for income inequality as well as obesity.

Given the relationship between income and obesity in the U.S. (and the trend toward more obesity with greater income inequality), one would expect the recent recession to have made Americans fatter. And one would be correct. Between 2004 and 2008, obesity in the U.S. went from 31.7% of the population to 32.5%, a gain of 0.8 percentage points over four years. But from 2008 (the beginning of the recession) to 2010, obesity went from 32.5% to 35.9%, a gain of 3.4 percentage points in just two years. That's an acceleration from roughly 0.2 points per year to 1.7 points per year. [source]

The reason these statistics and trends are important is that poverty and obesity are both predictors of poor health (and ultimately poor life expectancy); and the worsening of our economy is sure to have a multiplier effect on health care costs. Obesity-associated chronic disease already accounts for 70% of U.S. health costs. Rand data show that when obesity crosses a certain threshold (namely, when your actual weight is twice your ideal weight) health costs double. Rand says that obesity rates based on averages tend to hide the true future cost of obesity-related health care, because in the future many people who are today merely obese will cross the magic BMI 40 threshold, causing health costs to increase faster than expected.

All of this is by way of saying that there's a huge cost in lives and dollars to misguided economic policy. If we allow unemployment to go up and/or wages to go down and/or income disparity to increase, our already out-of-control medical costs will zoom even faster than expected. The best, most reliable way to lower health care costs is to reduce unemployment, reduce income disparity, and increase people's wages. Anything short of that is merely treating the symptoms.

Monday, February 18, 2013

Antidepressants, Tap Water, and Autism

Fathead minnows and contaminated tap water provide clues to the rise in autism.
Now that U.S. doctors are treating 27 million people a year for depression and writing well over 250 million prescriptions per year for antidepressants, we shouldn't be surprised to find psychoactive drugs showing up in our drinking water. The number of scientific papers on this subject has been increasing steadily since 2004. Some of the more-frequently-cited ones are listed further below.

It's a fact that no drug is 100% absorbed by the body. Regardless of which medication you take, a sizable percentage of the drug passes unmodified (completely unmetabolized) out in your urine. And if you drink municipal water, guess what's in your water even after it's been "purified"?

In 2008, the Associated Press took it upon itself to investigate the situation. Here's what they found: In watersheds of 35 major cities surveyed by the AP, widely used pharmaceuticals (antibiotics, sedatives, SSRIs, mood stabilizers, hypertension meds, hormones for menopause, OTC medications, others) were at detectable levels in 28. (You might want to consult this article and check the cities listed in the column on the left side of the page to see which pharmaceuticals might be in your city's water.) 

A particularly interesting study that warrants comment is "Psychoactive Pharmaceuticals Induce Fish Gene Expression Profiles Associated with Human Idiopathic Autism," published June 6, 2012. The idea of the study is this: since there is a known link between antidepressant use during pregnancy and development of autism spectrum disorders (ASD), the authors thought it might be a good idea to see whether the psychoactive meds most commonly found in drinking water could affect the expression of genes involved in neurological disorders. They prepared dilute solutions of Tegretol, Effexor, and Prozac (in concentrations replicating those found in municipal water supplies) and exposed fathead minnows to the water for 18 days. Then they did gene-expression analysis of the minnows' brains for multiple classes of genes involved in multiple neurologic disorders. (Yes, fish have some of the same brain genes we do.) According to the authors of the paper:
"[W]e examined gene expression patterns of fathead minnows treated with a mixture of three psychoactive pharmaceuticals (fluoxetine, venlafaxine & carbamazepine) in dosages intended to be similar to the highest observed conservative estimates of environmental concentrations. We conducted microarray experiments examining brain tissue of fish exposed to individual pharmaceuticals and a mixture of all three. We used gene-class analysis to test for enrichment of gene sets involved with ten human neurological disorders. Only sets associated with idiopathic autism were unambiguously enriched." [Emphasis added]
We're not talking about one or two autism genes here. We're talking about hundreds of different genes that are differentially expressed in autism.

This is actually some of the strongest evidence yet that psychiatric medications are connected with the sharp rise in autism that began in the 1980s. Prozac was introduced in 1988, and we all know what happened then. Depression went from a relatively rare condition (affecting well under 100,000 people in the 1950s) to an epidemic disease affecting 27 million Americans. Meanwhile autism took off like a rocket.

Obviously, more work will need to be done to determine the degree to which psych meds are behind the increase in autism. Until then, you might want to sit back, relax, and have a glass of water. It might just soothe your nerves.


References

Celiz MD, Tso J, Aga DS (2009) Pharmaceutical metabolites in the environment: Analytical challenges and ecological risks. Environmental Toxicology and Chemistry 28: 2473-2484. doi: 10.1897/09-173.1.

Croen LA, Grether JK, Yoshida CK, Odouli R, Hendrick V (2011) Antidepressant Use During Pregnancy and Childhood Autism Spectrum Disorders. Arch Gen Psychiatry: archgenpsychiatry.2011.2073 http://www.issues4life.org/pdfs/iariskptb034.pdf (July 4, 2011)

Jjemba PK (2006) Excretion and ecotoxicity of pharmaceutical and personal care products in the environment. Ecotoxicology and Environmental Safety 63: 113-130. doi: 10.1016/j.ecoenv.2004.11.011.

Madureira TV, Barreiro JC, Rocha MJ, Rocha E, Cass QB (2010) Spatiotemporal distribution of pharmaceuticals in the Douro River estuary (Portugal). Science of the Total Environment 408: 5513-5520. doi: 10.1016/j.scitotenv.2010.07.069.

Mennigen JA, Sassine J, Trudeau VL, Moon TW (2010) Waterborne fluoxetine disrupts feeding and energy metabolism in the goldfish Carassius auratus. Aquatic Toxicology 100: 128-137. doi: 10.1016/j.aquatox.2010.07.022.

Metcalfe CD, Chu SG, Judt C, Li HX, Oakes KD (2010) Antidepressants and their metabolites in municipal wastewater, and downstream exposure in an urban watershed. Environmental Toxicology and Chemistry 29: 79-89. doi: 10.1002/etc.27.

Santos L, Araujo AN, Fachini A, Pena A, Delerue-Matos C (2010) Ecotoxicological aspects related to the presence of pharmaceuticals in the aquatic environment. Journal of Hazardous Materials 175: 45-95. doi: 10.1016/j.jhazmat.2009.10.100.

Saturday, February 16, 2013

Off the Wall Novel-Writing Prompts

Your assignment: Write a full-length novel using one of the following prompts.

A portal to another reality is discovered in a Mayan temple. Two scientists decide to enter the portal. One comes back.

A modern-day Viking community is found living in a remote section of Newfoundland. They're hiding a strange treasure.

Fight Club for Women, gone horribly awry.

A 90-year-old white man living in Marrakesh claims to be Jack Kerouac. And is.

A modern-day Thor Heyerdahl decides to sail solo from Baja California to Japan on a raft, with no radio or modern electronic aids of any kind. During the trip, he notices there are no airline jet contrails in the sky. When he manages to reach Japan, he finds the country completely devoid of people. All buildings are standing, nothing damaged.

A black female Army soldier with severe PTSD returns from Afghanistan to find that her white-Anglo car-salesman husband is actually an unemployed Latino alcoholic whom she doesn't recognize.

A homeless man on the streets of New York City claims to be 900 years old, the last surviving member of a race of Nephilim.

Two Kuwaiti billionaires, Khalid Amin and Mostafa al-Walid, build a faithful recreation of the Roman Colosseum in Macau and start selling tickets to real-world fight-to-the-death gladiatorial events. An International Suicide Society recruits members to fight in the events (or die passively). The International Court of Justice in the Hague sends envoys, accompanied by U.N. peacekeeping troops, to arrest the Colosseum's owners. The envoys (along with the U.N. detachment) are put to death in the arena. An ex-Navy-Seal private eye from Harlem must join forces with a Muslim cleric (a boyhood friend of the villains) to bring the death tycoons to justice before they can open new theatres of slaughter in Dubai and Yemen.

A clever molecular geneticist transplants the genes for opium production from Afghan poppies to French sunflowers. The sunflower seeds become popular worldwide. Sunflower cultivation becomes illegal in country after country. Flowers are impounded, gardeners put in jail. But there's no stopping the Yellow Scourge.

In the future, penal systems are overburdened to the point where inmates for all but capital crimes are allowed to go free, except that they must wear a surgically implanted transponder and a color-coded Kevlar wrist-manacle: orange for first-time violent offender, red for multiple-conviction violent offender, yellow for non-violent felons, green for white-collar crimes, etc. The manacle can't easily be removed, but if it is, the subdermal transponder will signal the authorities. Google Crime will show you a real-time map of where criminal elements are, at any given moment. The aggregation patterns on Google Crime are mutating in mysterious ways. It's obvious that "something's up."

An irregularly shaped, completely opaque/non-reflective/cold metal object the size of a house is (accidentally) discovered orbiting the sun in the path of a comet. An unmanned satellite is sent to investigate. The orbiting object is a 100-million-year-old space probe sent by another civilization to study earth, except it has long since stopped operating. There are two alien occupants inside, both dead for millions of years. Or are they?

Friday, February 15, 2013

Do Lung Cancer Warning Labels Cause Cancer?

Smoking in Japan: few warnings, little cancer.
A few days ago, I wrote a piece called "Lung Cancer and the Power of Suggestion," in which I put forward the seemingly ludicrous hypothesis that cancer warnings on cigarets might actually be to blame for the mysterious and alarming rise (over the last five decades) in lung cancer rates among American smokers. It sounds pretty absurd until you start looking at the epidemiological evidence. (Or as my mother would say, it's all fun and games until somebody loses a lung.)

Mainly what you need to know is that while U.S. lung cancer rates increased in lockstep with per capita cigaret consumption from the early 1900s to the early 1960s, everything changed after 1964. Starting that year, U.S. per capita cigaret consumption began a long steady decline (lasting until the present day). Heart disease has also been on a long downtrend (since the 1950s, again lasting until the present day). But contrary to expectation, lung cancer rates shot up after the Surgeon General's report.

Lung cancer among U.S. men seems to have plateaued since 1995 (at an astonishingly high 72 cases per year per 100,000 population). But for U.S. women the curve may only now be peaking. And in most other Western countries, lung cancer among women is still going up.

Lung cancer rates among U.S. smokers are now so high, overall, that they can no longer be explained by science. (Cancer rates for non-smokers have stayed the same.) The mathematical models that once accurately predicted lung cancer rates based on smoking behavior have broken down completely. I reviewed some of the science behind this in my previous post. Basically, epidemiologists no longer know what to make of the situation. Some have suggested that cigarets are inherently more toxic now due to design changes and/or changes in the way people smoke. Those suggestions are speculative, however, and in general they're not borne out by the facts. We know that adding filters to cigarets decreased the toxicity of cigarets by almost fifty percent, and we know that menthols are thirty to forty percent less cancer-causing than non-mentholated cigarets. The argument that people are somehow sucking ten times more poison out of a 1.2-gram cigaret than they did 60 years ago (before low-tar cigarets, before the wide popularity of menthols) is a bit silly. There's only so much poison in a 1.2-gram cigaret. You can suck on it as hard as you want. It's still 1.2 grams' worth of "stuff."

After writing my original post on lung cancer and the power of suggestion, it occurred to me that if warning labels really did have anything to do with rising lung cancer rates (by their ability to program smokers into getting lung cancer through power of suggestion) then a good test for that hypothesis would be to find a large population of smokers who've been able to buy cigarets without brainwashing (without warning labels and without a constant barrage of anti-smoking messages) over the past few decades, and see what their cancer rates are like.

It turns out Japan is just such a population. Smoking rates in Japan have long been among the highest in the Westernized world. But lung cancer rates in Japan are nothing like what they are in the U.S. (If you start Googling around you'll see this is a well-known anomaly.)

And guess what? Japanese cigarets come with no cancer warnings.

A 2001 study of U.S. and Japanese smokers (Stellman et al., Cancer Epidemiol Biomarkers Prev November 2001 10; 1193) found:
The risk of lung cancer in the United States study population was at least 10 times higher than in Japanese despite the higher percentage of smokers among the Japanese. [emphasis added]
The major difference between smoking in Japan and smoking in the U.S.? Warning labels on Japanese cigarets are small, and their content laughably weak.

Japanese cigarets come with two warnings. The warnings say:
未成年者の喫煙は禁じられています。 Smoking by minors is prohibited. 

あなたの健康を損なうおそれがありますので、吸いすぎに注意しましょう。 Because smoking carries a risk of harming your health, take care not to smoke too much.
Cancer is never mentioned.

Researchers have looked at whether there's some kind of genetic factor (some ethnicity-related resistance to lung cancer) happening here. There's not. Japanese who move to the U.S. acquire U.S.-like cancer rates.

Researchers have also looked at American cigarets to see how they differ from Japanese cigarets. The latter tend to use charcoal filters more often, and U.S. tobacco (as processed for cigaret use) is known to have more nitrates to promote faster burning. These factors can explain about 40% of the toxicity difference between Japanese and American cigarets -- far short of what's needed to explain the 10-fold difference in lung cancer risks.

So does the Japanese experience prove the hypothesis that warning labels themselves carry a risk of cancer? Of course not. For all I know, it proves that eating more sushi protects you against lung cancer.

Here's what we know for sure. Smoking research is at a crisis point. The ultra-high lung cancer rate for U.S. smokers is in dire need of a convincing explanation. The explanations that have been put forth so far are too weak by an order of magnitude.

I think a canny scientist, having reviewed all the facts surrounding the current situation with regard to lung cancer and smoking in the U.S., would say: "This smells like a scientific breakthrough waiting to happen. It's a genuine riddle, awaiting a genuinely convincing answer. When the answer comes, it'll be big."


Thursday, February 14, 2013

A First Lesson in Programming

Yesterday I talked about teaching yourself programming. I said it's something anybody who understands "if/then" can do; you don't have to be a math whiz or a major-bigtime geek to learn to read and write code. I also said that today I'd present a first programming lesson. So let's get started.

a = 1;

What does this mean to you? If you're not already a programmer, it probably means "a equals one." But in the wonderful world of JavaScript, that's not what it means. It means "assign the numeric value of 1 to a (and from this point forward, treat any appearance of 'a' as if it were 1)."

In JavaScript (and Java and C++), "=" is the assignment operator. It doesn't mean "equals."
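
If you'd like to watch assignment in action, paste these lines into your browser's JavaScript console. (The console.log function isn't part of this lesson; it just prints whatever value you hand it.)

a = 1;          // assign 1 to a
console.log(a); // prints 1
a = a + 2;      // take a's current value, add 2, store the result back in a
console.log(a); // prints 3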

How then can you say "equals"? Consider this:

a == 1;

This is a perfectly legal (syntactically correct) JavaScript statement. Legal but useless. It asks "is a equal to one?" and then throws the answer away. It's useless in that it does nothing to 'a' and changes nothing in the state of the computer. It's the programming equivalent of neon gas: inert.

When would you want to use "a == 1"? Consider this statement:

if (a == 1)
   doWhatever( );


Notice that the top line is not a statement by itself. The semicolon comes at the end of the second line. Therefore the whole statement reads: "If the value of a is equal to 1, execute the function named doWhatever." (A function is just what you think it is: a named collection of statements that occurs elsewhere.) If a isn't equal to one, just skip the doWhatever() and do nothing.
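
Here's a complete, runnable version of the example, with everything filled in. The body of doWhatever and the starting value of a are my own inventions for illustration; the lesson's point is the if-test on the top line.

var a = 1;               // assign 1 to a

function doWhatever( ) { // a named collection of statements
    console.log("a is equal to one");
}

if (a == 1)
    doWhatever( );       // runs, because a does equal 1

Paste it into your browser's JavaScript console and you should see the message appear.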

Make sense so far? Good. In that case, it's time for a pop quiz. What's wrong with the following piece of code?

if (a = 1)
   doWhatever( );


Technically, there is nothing wrong with the syntax of this statement. It will execute without error. But it's not a good piece of code. Why? Consider what it says. It says "give the variable a the value 1, and if that value is true, execute the function doWhatever." In other words, "a = 1" sets a to one (whether that's what you intended or not). The "if" then asks whether the value one counts as true, which it does, in the world of code. (Seemingly useless fact: in JavaScript, any value other than 0, the empty string, null, undefined, NaN, or false is treated as true.) Thus the top line of this statement will always be true, and doWhatever() will always be called. You might as well leave out the top line and just call doWhatever(). Except that's probably not what you wanted to do; if it were, you would have written the code that way to begin with.
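
One classic defensive habit (my suggestion, not part of the lesson proper) is to write the constant on the left side of the comparison:

if (1 == a)        // same meaning as (a == 1)
    doWhatever( );

Now if you accidentally type one equals-sign instead of two, "if (1 = a)" won't run at all: JavaScript refuses to assign a value to the number 1 and reports a syntax error, so the bug announces itself instead of hiding.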

If that made any kind of sense, congratulate yourself. You've done your first bit of debugging.

Was any of it hard? Was any of it "rocket surgery"?

Let's recap. Here's what you learned:

1. A piece of code contains statements.
2. A statement ends with a semicolon.
3. You can have variables with names like 'a'.
4. The equals sign is actually an assignment operator.
5. But two equals-signs in a row means "equals."
6. The "if" keyword does what you think it does.
7. In JavaScript, a non-zero, non-empty value is treated as true in the context of an "if" (see the examples just after this list).
8. There are things called functions, which are basically just named collections of statements.
9. Code can be buggy without containing illegal syntax! It can be syntactically correct, yet logically flawed. And the flaw can be hard to spot.
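
To flesh out point 7 (these examples are mine, not part of the lesson above), JavaScript's built-in Boolean function tells you exactly how an "if" would judge any value:

console.log(Boolean(1));    // true  -- any non-zero number counts as true
console.log(Boolean(42));   // true
console.log(Boolean("hi")); // true  -- a non-empty string counts as true
console.log(Boolean(0));    // false
console.log(Boolean(null)); // false
console.log(Boolean(""));   // false -- the empty string counts as false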

That's a huge amount to learn in one lesson. But it really wasn't that hard, right?

I hope this lesson gives you encouragement to continue on. Where should you go from here? I recommend that you start by reading more about JavaScript's data types (there's a small taste at the end of this post). Then perhaps check out http://www.codecademy.com/learn for free structured online courses (on your choice of Ruby, JavaScript, or Python). If it starts to sound tedious, remember that there's a lot of rote and tedium in the early stages of learning any language (French, Hebrew, Ruby, JavaScript), and at some point you're bound to feel like you're doing a lot of wax on, wax off. But also remember: like the karate kid, you'll eventually break through. And yes, the payoff is worth it.
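
As a head start on that data-types reading, here's a quick taste (the variable names are my own examples):

var count = 3;             // a number
var name = "Miyagi";       // a string (text)
var ready = true;          // a boolean (true or false)
var scores = [90, 85, 77]; // an array: an ordered list of values
console.log(typeof count); // prints "number"
console.log(typeof name);  // prints "string"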

Wednesday, February 13, 2013

How to Learn to Code (a Guide for Non-Geeks)

The other day I was reading an article in The Atlantic called "How I Failed, Failed, and Finally Succeeded at Learning How to Code." It tells how the author, an ordinary mortal named James Somers, decided he wanted to learn how to code, so he bought a huge tome called Beginning Visual C++ and just started reading it. But he quickly became overwhelmed and quit.

After several false starts, Somers finally made it over the hump. He credits his success to working through some of the problems at Project Euler, and attributes his early failure to poor teachers (in the form of bad books; the term is his).
Learning to program: a lot of wax on, wax off.

Many people have had the Somers experience, I'm sure. Some don't even get that far, of course.

There are plenty of people in this world who wish they could read and write code but haven't pursued it, out of fear that it will take too much time, be too hard, require too much self-discipline, demand too much math, or whatever. Some no doubt think formal classroom training is their best and only learning modality: "I should have taken a programming class in college."

Here's what I would tell somebody who wishes he or she could code but doesn't know if it'll be worth the effort (or even possible).

First, I've met scores, maybe hundreds, of talented programmers in my life, and many of the very best ones were self-taught. All were mere mortals. Sure, some were superstar genius types who could have taught themselves Sanskrit if they wanted. Most were not.

So: You don't have to be a genius to learn how to code. And you don't need formal training. You can teach yourself. It does help (a lot) to have a mentor to turn to. But even that part is optional.

Does it require math skills? No, not really. Somers, in his Atlantic article, leaves the impression that it helps if you're naturally interested in math puzzles. I strongly disagree. Programming is about logic, not math. The main requirement is that you understand the basic concept of if this, then this; if not this, then that.

So really, the question is: What's the best way to teach yourself?

First, choose a language. To me, the choice is brain-dead obvious: JavaScript. Where programming is concerned, it's the lingua franca of the Web. Sure, if you specifically want to become a systems programmer or a Linux geek, you might want to consider C++ or Java, or even start with Linux shell scripts. But the neat thing about JavaScript is, the basic syntax is the same as for C, C++, or Java. Once you've learned JavaScript, you have transferable skills.

Another good thing about JavaScript is that it's powerful, but not powerful enough to blow up your machine. It has designed-in safeguards. You're never going to overwrite the boot sector of a hard drive with JavaScript, nor overwrite screen memory, nor take down the operating system. That's important, because one all-too-legitimate fear many would-be programmers have is that they'll unintentionally do something that leads to loss of data or ends in horrific non-undoable mayhem of some sort. JavaScript is comparatively tame in this respect. Bjarne Stroustrup once said that while the C language makes it easy to shoot yourself in the foot, "C++ makes it harder [to do so], but when you do, it blows your whole leg off." He might just as well have been comparing JavaScript (rather than C) to all other languages (rather than C++).

Programmers love to argue (to death) the merits of one language over another. But the simple truth is that all programming languages do the same things. They all have conditional statements (if/else), they all let you use variables, they all have a construct for doing loops, they all have a way of dealing with strings (text), they all have some notion of subroutines (or "functions" or "methods" or "procedures"). So in a sense, it doesn't really matter which language you choose to learn first.
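
As a tiny illustration of those shared ingredients, here they all are in a few lines of JavaScript (the names are invented for this sketch; any language you pick will have the same parts in different clothing):

var greeting = "Hello";               // a variable holding a string

function shout(text) {                // a function (a.k.a. subroutine or method)
    return text + "!";
}

for (var i = 0; i < 3; i++) {         // a loop that runs three times
    if (i < 2)                        // a conditional
        console.log(greeting);        // prints "Hello" on the first two passes
    else
        console.log(shout(greeting)); // prints "Hello!" on the last pass
}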

How should you teach yourself? You'll find no shortage of resources online, but if you're like me, you probably relish the opportunity to cozy up with a reprocessed tree (a plain old book). You can buy a Dummies-style book or just go ahead and grab Learning JavaScript by Shelley Powers and start reading. You should also purchase Flanagan's classic JavaScript: The Definitive Guide (O'Reilly) as a reference; you'll eventually want it, trust me.  

Here's what to expect. You won't be writing much code for a while. You need to take in a lot of preliminaries first. In fact you'll take in so many preliminaries that eventually you'll start to feel like you're not getting it. You'll have a tremendous number of seemingly useless facts in your head about data types, how to declare variables, "execution scope," and so on, and you'll fear that none of it seems to be tying together; it's just hurting your brain, and you have nothing to show for it.

When you get to that point, you've reached what I call the WOWO Point. Wax On, Wax Off. You saw The Karate Kid, right? Remember all the "motions" the kid had to go through to learn karate? Mr. Miyagi had the kid doing repetitive, menial drudgework like painting his fence, sanding his floor, and waxing his car, for what seemed like forever, and for what seemed like no purpose at all. The kid was just about to lose patience and give up on Miyagi when he had the critical breakthrough: suddenly he had the muscle memory, mental discipline, and reflex skills to begin doing karate for real.

This is also what happens with programming. You do a lot of "wax on, wax off," feeling like you're learning a lot of gibberish for nothing, and then suddenly the elements come together and bang! it begins to make sense. At that point, you realize you're free to create, not just absorb. It's like learning any natural language: you start with rote memorization of vocabulary and syntax rules, accumulating what feels like useless junk in your head. But once you've internalized the rules of the language (and some vocabulary), you realize you're free to construct sentences of your own, not just reuse sentences you learned by rote out of a textbook.

Tomorrow I want to continue on this theme by presenting a first programming lesson (in JavaScript), for raw beginners, a lesson for total non-geeks who are curious about learning to code. The purpose will be to prove to you that programming is not "rocket surgery."  Anyone can do it. Tomorrow you'll see!