Map of life expectancy at birth from Global Education Project.

Thursday, May 30, 2013

Patient Centered Outcomes

Okay then, even if we don't know exactly what a disease is, as people with bodies and minds, we know what we want, right? Whether or not my inability to grow hair on top of my head counts as a disease would seem irrelevant. Either I care about it a lot, a little, or not at all. If I care about it a lot, I want a "cure." Ditto with all the bodily afflictions of aging or the spiritual afflictions of being a sentient social being. Who cares what you call it if you can fix it, right?

Wellll . . . it's not quite that simple either. To take the most straightforward case, there's a pill I can take that supposedly will reverse, or at least retard, baldness. But it might reduce my sex drive, and there's a possibility -- we aren't really sure -- that it could increase the risk of developing a more virulent, clinically significant form of prostate cancer. Do I want to take it?

Ever since the Thalidomide disaster, we've required that drug manufacturers show evidence of safety and effectiveness before they are allowed to market their potions. Leaving aside, for now, the important question of the strength and credibility of the evidence they are required to submit, before you can define effectiveness you need to define what they are being effective against (or for, however you want to look at it). And that means you need to (drum roll please) name a disease and specify measurable indicators of its presence, severity, or symptomatology. And then you need a nosology of adverse effects. And you need statistical methods to relate use of the purported remedy to these outcomes, good or bad.

So we can't get away from it. We need to classify and name and measure. But who does this? And why should we agree with them?

Is a science of medicine possible that avoids questions of personal values and philosophy of the good? Why no, it isn't. A science of human biology might be possible without a moral dimension,* but medicine, no, because that is the fundamental difference between medicine and biology. So if your doctor claims to have the one true scientific answer to some question about your health or well being, no, she's mistaken.

* Though I doubt humans would be capable of practicing such a science. We can do it with fruit flies, but not ourselves.

Tuesday, May 28, 2013

A major semantic problem

I think at some point in the winding trail of bread crumbs I've been leaving here lately I've mentioned that we need to talk about the concept of "disease." Here's your basic dictionary definition:

a disordered or incorrectly functioning organ, part, structure, or system of the body resulting from the effect of genetic or developmental errors, infection, poisons, nutritional deficiency or imbalance, toxicity, or unfavorable environmental factors; illness; sickness; ailment.
It goes on to give some more metaphorical meanings, which is interesting BTW since "depravity" is among them. Anyhow . . .

You may already have concluded that this definition isn't very, well, definitive. The list of causes isn't helpful since, between genetic or developmental errors and unfavorable environmental factors, we have exactly everything that can possibly happen to us. And "disordered or incorrectly functioning" just begs the question: what is order or correct functioning? There is actually another problem with this definition. Weak as it is at ruling anything in or out, it actually fails to include many diseases that we do recognize, because when we name something a disease, we can't necessarily point to a specific organ, part, structure or system which is disordered or incorrectly functioning in some way that we understand. So it's not just useless, it's wrong.

For what it's worth, here's what I think we mean when we talk about disease, in a medical sense.

The first category is fairly clear cut. These are situations that fulfill the definition, in which we can pretty much agree intuitively that one of our parts or processes is not functioning correctly. We don't like it when we have constant pain, or can't do something that most people can do, or die otherwise than in our sleep at age 85. If we can confidently attribute the cause to a known physical property of our body and its functioning, we can name a disease and not get much of an argument. For example, if a bacterium is eating our lungs, we can name that, and hopefully if we take the right antibiotic we can end up being cured. We had pneumonia, or TB, now we don't. Easy.

But we start to have a problem with that comparator, what "most people can do." Three problems, actually. The first is the threshold of "most." How uncommon, or how far from the norm, do you have to be before you merit a disease label? For example, what used to be called mental retardation, now more politely called cognitive disability or limitation, is defined completely arbitrarily, by a measured IQ of 70 or less. Give the same person an IQ test tomorrow, and it might be 75. And what's the difference between 69 and 71? There's a difference all right: if it's 69, they can't execute you, but you can get special ed.

The second is what abilities really matter. I can't grow hair on the top of my head. Is that a disease?

The third is that we are all, every one of us, born with a hereditary, incurable, inevitably fatal condition that over time robs us of our physical and mental capacities. After age 45 or so, you will need reading glasses.  Your ability to hear high frequencies will decline. Your joints will start to ache. You will lose lean muscle mass.  I don't want to continue with all this depressing stuff, you can add to this list as you like, but the point is, where do you draw the line between having diseases and the human condition of mortality? In every one of these cases, we know quite well what physical processes are responsible, but are they "incorrect" or "disordered"? I'm not sure, but doctors certainly will treat all of these conditions.

Then we have situations in which we aren't presently experiencing any misery whatsoever but doctors say we have a disease because something about us puts us at risk of misery in the future: type 2 diabetes, hypertension, hypercholesterolemia, that sort of thing. Again, these are usually defined with some arbitrary threshold on a test of some sort.

And of course as I've discussed earlier we have diseases which are labels for clusters of symptoms, often largely consisting of self-reported experiences, for which no specifically disordered or incorrectly functioning organ, system or process is known. This is particularly characteristic of psychiatric "disorders" but there are some in other fields of medicine. For most of these we also have the earlier problems of locating the threshold of diagnosable abnormality and distinguishing the inevitable pain of existence from something that needs to be cut out of us by professional intervention. To the problem of psychiatric diagnosis we also have to add that some of them are labels for the way other people feel about the patient, and are not in fact distressing to the patient, e.g. narcissistic personality disorder.

So, before we can even do these clinical trials, we need to have definitions and labels for diseases, and for the amelioration thereof. But to what extent is that a scientific question, and to what extent a moral or cultural quandary? Doctor-think is pretty much exclusively done in disease categories. It may be helpful, even necessary, but it can also be limiting, and it can be used to sell us pills and other stuff we might be better off without.  

PS: in case our friend Ana is reading, I'll be in Basel August 19-22 for the international environmental health conference. Let me know if you can do lunch.

Monday, May 27, 2013

The triumph of public health

As a commenter notes, one hypothesis for the decline in violent crime in the past two decades is the removal of lead from gasoline. This was done because of evidence linking lead exposure in infancy and early childhood to reduced IQ and poor school performance. That lead exposure could lower impulse control and social integration was not so widely recognized, but the idea that this was an additional unanticipated benefit is plausible. In fact, Herbert Needleman, the researcher credited with discovering the connection between lead and reduced IQ in the 1970s, found in 2002 an association between lead exposure and adjudicated delinquency.

Another important fact about Needleman is that he was subjected to an intense assault on his competence, motives and integrity by the lead industry, which recruited mercenary scientists to criticize his work and launched a massive public relations and lobbying campaign to block restrictions on the use of lead. Heard anything like that before? Oh yeah, tobacco, automobile safety, climate change, pesticides . . . This is what capitalists do when science says their products harm people. They spend whatever money it takes to lie to us so they can keep on profiting from murder for as long as possible.

But, in many of these cases, they have ultimately lost. Not only have we greatly reduced childhood lead poisoning, thanks to Ralph Nader and other activists, motor vehicle travel is much safer; everyone now accepts the harmful effects of tobacco and rates of smoking are down considerably; consumer products in general are safer than before -- children's clothing is less flammable, baby cribs and toys are safer, pharmaceutical regulation is far from perfect but it's much better than we had before Thalidomide. I could go on and on -- the bottom line is we're safer and we're living longer and staying healthier because of effective public health approaches to many dangers.

But few people, and almost no politicians, are talking about a similar approach to firearms. The approaches that are talked about -- banning certain styles of rifles and universal background checks on gun purchasers -- are as feckless as they are unlikely to happen. Most gun accidents, suicides and assaults are not done with so-called assault weapons, but with handguns. In any case, banning "assault weapons" is impossible. The AR-15 is not a particular weapon, it's a kind of kit. Various components essentially snap on to a central unit, called the lower receiver. Once you have one of those you can buy whatever pieces you want, which are not in themselves firearms and can be freely manufactured, bought and sold, and make your own dream rifle. What's more, you can buy a nearly finished lower receiver that just needs a few holes drilled in it, and the unfinished piece is also not considered a firearm and not regulated.

The country is saturated with firearms and it's pointless to even think about somehow reversing that situation. There is no particular reason not to have universal background checks but it won't do much good either. It won't stop the daily displays of idiocy by gun owners, the suicides, or even much crime -- many guns used by criminals are stolen. But, a public health approach to gun safety could work.

First of all, we can do what we do with motor vehicles: register guns and license their operators. Every motor vehicle has a unique identification code, in several places, which is difficult to remove. It corresponds to a record of the registered owner of the vehicle. If a car is stolen, it's very difficult to sell, and if the police find it, they can trace it. One would think gun owners would be in favor of that, which would protect their property. In order to operate a motor vehicle, you need to go through training and pass a competency test. Your license is revocable for cause.

Nobody thinks this is oppressive, or that the government ultimately wants to confiscate all of our cars. On the contrary, it enhances our liberty. I would not feel free to drive on the public roads if I didn't know that it was reasonably safe to do so because unsafe vehicles and irresponsible or incompetent drivers are, to the extent possible, barred. My liberty to go to the grocery store or the movie theater, or just to sit unmolested in my own home, similarly depends on knowing that idiotic, irresponsible or antisocial people aren't going to shoot me.

Licensing means knowing that just because the magazine is detached doesn't mean there isn't a round in the chamber. It means having a gun safe and storing weapons where four-year-olds won't start playing with them. No, the cops won't go into your house to check, but if something bad does happen because you were irresponsible, you will lose your license. What's wrong with that? Registration means weapons have to be equipped with safeties. It's even possible to make a weapon that won't fire unless the bearer is carrying an RFID device -- in other words, only you can fire your own gun unless you give permission. There are lots of possible approaches to gun safety that won't limit anybody's liberty or ability to use guns as they wish in legal and safe ways, but we aren't allowed to talk about them.

*I was at one time very involved in lead poisoning control, which by that time was largely limited to the problem of paint in older housing. We haven't eliminated childhood lead poisoning yet, but we have greatly reduced it.

Saturday, May 25, 2013

Some good news?

Actually there's quite a bit, if you step back from the media circus. By a circuitous route, I came across this, for example. Rape is notoriously under-reported to the police, but the Bureau of Justice Statistics does an annual survey of 40,000 households and 75,000 people called the National Crime Victimization Survey. (Yes, I know, that's social science and therefore un-Christian and a threat to our freedoms. No doubt the Republicans will put a stop to it soon.) It turns out that the rate of completed or attempted rape or sexual assault against females fell by a lot from 1995 to 2010 -- from 5 per 1,000 females 12 and over to 2.1 per 1,000.

I don't know why -- people have various hypotheses about this. But violent crime in the U.S. in general has fallen a lot in the past couple of decades. Think about it.

Wednesday, May 22, 2013

A strange, sad story

This happened in my old neighborhood in Boston. This guy, a former Massachusetts state representative who graduated from UMass Amherst and went on to study at the London School of Economics (no word in the story on a degree), was busted after having 480 grams of crystal meth mailed to him at the middle school where he was working as a tutor. He's about my age.

How or why you go on from being Chair of the House Committee on Ethics, Chair of the Education Committee, and Chair of the Taxation Committee, to becoming a meth dealer in your 50s, I don't know. But it gives me occasion to think on the trajectory of people's lives. I've been very fortunate -- my career has continued to be nothing but up, in the terms that matter to me, even as I come almost within sniffing distance of the age when many people retire. (I have no such intention.) But what's happening right now to a whole lot of folks is just the opposite.

This editorial in Bloomberg News should shock us all. More than 4 million Americans who are still looking for work have been out of work for 6 months or more. Many more people -- it's hard to find out how many -- aren't counted because they have simply given up looking for work. And once you lose your grip on the job market, it's very hard to get back in -- employers actually discriminate against long-term unemployed people in hiring.

We hear countless stories about people who have worked all their lives, managed to carve out a decent middle class standard of living, and then just fell right off the rails. They're in their 50s, they can't get a job in their field, and it's just too late to start over. People have lost their homes, sucked out their retirement savings, and now they're looking at a bleak old age. There are millions of these people.

The political leadership doesn't seem to care. The only way to tighten up the job market and give these folks a chance is for the federal government to adopt a stimulative fiscal policy. In other words, spend money to rebuild the national physical and human infrastructure and put people back to work. We know damn well this is what we need to do, and that in fact it would reduce the federal budget deficit in the long term, because a healthy and growing economy will mean more tax revenues and lower expenditures on the social safety net. But as Eduardo Porter laments at the linked essay, we're doing the exact opposite because we've been taken over by ideologues who have no connection to economic reality. (Porter goes off the rails himself by saying we need a grand bargain to fix the crisis in Social Security and Medicare by raising the retirement age and restricting benefits. Bullshit. All we need to do about Social Security is eliminate the cap on income subject to the SS tax. Problem solved. As for Medicare, reforming how we pay for services and rationalizing our health care system will do the job, but nobody is talking seriously about that. But I digress.)

The cruelty and fundamental irresponsibility of our political leadership is appalling. I'm not saying y'all should go out and start dealing meth, and in fact I don't know what happened to Doran. But his story did get me to thinking . . .

Tuesday, May 21, 2013

Cross of Gold

That would be the Randomized Controlled Trial (RCT), the "gold standard" of evidence for the effectiveness of medical interventions. ("Intervention" is the general term for anything doctors do, be it pills, surgery, recommendations to exercise, shaking a rattle and chanting the name of a benevolent spirit, you name it.)

Ideally, it works like this.

You must specify several conditions ahead of time:

a) Who is eligible to be a subject of the trial. If the intervention is intended to be curative, presumably they must meet certain diagnostic criteria for actually having disease X. You might want to restrict the trial to people in a certain age range. For example you might exclude children for such reasons as their inability to give informed consent and the ways their biology differs from adults', or you might exclude very old people or people with significant co-morbidities because they are unlikely to respond as well and would attenuate any signal you might get. Often you exclude people who don't speak English because you only speak English. And so on.

b) Exactly what will happen to the people in each arm of the trial. This includes not only precisely what intervention, or sham intervention, they will get, but what they will be told, what kind of efforts will be made to insure they will adhere to the protocol (e.g., actually take the pills on schedule), how often they will come in to be studied, whether any effort will be made to restrict anything that might happen to them that could mess up the results (e.g. they get some other intervention outside of the study), you name it.

c) How people will be recruited and enrolled, how they will be tracked, what efforts will be made to retain them in the study.

d) The end points you are hypothesizing. For example, significantly more people in the active intervention arm will meet some criteria for not having the disease 6 months after initiating the treatment; or symptoms will be reduced by some amount according to a carefully specified measure. If you think there will be a difference in response between males and females, old folks and young, people with and without any other characteristic, you must specify in advance. You must also specify what possible adverse events you will test for or assess.

e) The number you will enroll in each arm of the study, how they will be assigned, and how both the subjects and the people involved in the investigation will be blinded as to what treatment each person is getting.

f) The "statistical power" of your study. This means: if there is a real effect of a given size -- something hypothesized to be realistic -- what percentage of the time will the study "detect" the effect with a p value < .05? This is really important and I'm pretty sure most people don't get it.

So let me try to explain. Almost always, there is a certain amount of random variation in response. Some people just get better on their own. Some people are less responsive to a treatment than are others. Some people, in spite of meeting the diagnostic criteria, didn't actually have the thing in the first place. Whatever. The whole point of randomizing the subjects is that you hope these unmeasured factors will be evenly distributed between the two groups, but in case they aren't, you can use probabilistic reasoning to figure out the probability that an observed effect was just due to chance, versus being real. You need that randomness to compute a p value.

So, we set an arbitrary standard of 5%. If the observed effect would happen fewer than 1 time out of 20 even if there really is no difference between the groups -- the treatment is ineffective -- we call the effect significant. But an effect that is not statistically significant is not the same thing as no effect. A p value of .06 means the thing probably does too work, but you aren't allowed to make that claim. Why? No particular reason. That's just how we do it.
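A quick Monte Carlo sketch makes both points concrete (made-up numbers, nothing from any real trial): with no true effect, about 1 trial in 20 still crosses the p < .05 line, and the chance of detecting a real effect -- the power -- depends heavily on how many people you enroll.

```python
import random
import statistics
import math

def one_trial(n, true_effect):
    """Simulate one two-arm trial with n subjects per arm (sd = 1 in both arms);
    return the two-sided p value from a simple z test on the difference in means."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(2 / n)  # standard error of the difference, known sd = 1
    z = diff / se
    # two-sided p value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def significant_fraction(n, true_effect, trials=2000):
    """Fraction of simulated trials that come out 'significant' at p < .05."""
    return sum(one_trial(n, true_effect) < 0.05 for _ in range(trials)) / trials

random.seed(1)
print(significant_fraction(50, 0.0))   # no real effect: roughly 0.05 anyway
print(significant_fraction(50, 0.3))   # real but modest effect, n=50 per arm: often missed
print(significant_fraction(200, 0.3))  # same effect, n=200 per arm: usually detected
```

The middle number is the sobering one: a perfectly real effect, studied with too few subjects, comes out "not statistically significant" most of the time -- which, as above, is not the same thing as no effect.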

So what can go wrong? Plenty as you might imagine. More anon.

Monday, May 20, 2013

Remember Iraq?

Hardly anyone in the U.S. seems to remember that we blew a trillion dollars to eliminate the existential threat of Saddam's Weapons of Mass Destruction™, and bring the blessings of freedom and democracy to the Iraqi people, which would then miraculously metastasize throughout the Greater Middle East™ and bring about everlasting peace.

I spent much of the time whilst we were blowing that dough along with more than 4,000 American lives and, oh yeah, a few hundred thousand or a million Iraqis but who's counting, following events there very closely, as a contributor to Today in Iraq. (Now Today in Afghanistan, see the sidebar.) Actually, Americans pretty much forgot all about Iraq around 2007 or so, even though the last (officially acknowledged) U.S. troops didn't leave till 2011. So let's remember for at least a few seconds, okay?

I don't need to remind you that the Weapons of Mass Destruction™ didn't exist. Perhaps you do need to be reminded that even if they had existed, they would not actually have been nearly as massively destructive as the weapons the U.S. used in Iraq; chemical weapons and anthrax are highly overrated. But I digress. How's that democracy thing coming?

You probably won't have any idea if you rely on the U.S. corporate media for your information, but al Jazeera is reporting that the country is on the brink of renewed civil war; the alternative being that it break apart before that happens. At least 77 people are so far reported dead in sectarian violence today, and 200 injured. If you want some background on this you can read the Irish Times, where David Hirst explains the pretty basic history. The U.S. invasion ended up replacing a Sunni Arab minority regime with a Shiite Arab majority regime. (Kurdistan actually was already quasi-independent, and remains so.) Sunni Arabs have no political influence or rights, they don't get basic government services, and their leaders are being persecuted. So they are rebelling.

This was basically inevitable. The U.S. political leadership and corporate media had no understanding of Iraq when they launched the war, and couldn't be bothered listening to anybody who did. Democracy does not ride into town on the barrel of a tank or a one-ton bomb. We would do well to remember this as president McCain and the Sunday yammerers try to taunt the administration into blundering into the Syrian conflict. The situation in Iraq, and Syria, is very bad already and in grave peril of getting worse and spreading further. That is true. It does not follow that "we" must fix it. We can't, and if we tried it would be for all the wrong reasons, i.e. to try to install a regime that would be friendly to our perceived interests, mostly having to do with insuring that Israel remains completely unaccountable to international law and the basic norms of civilized behavior. That absolutely will not happen.

We can join the international community in trying to ameliorate the worst of the consequences of the conflict, but you know darn well the U.S. isn't going to spend serious money taking care of Arab refugees or getting humanitarian aid into a combat zone. Not when we aren't even willing to feed our own people. So it's very tragic and sad. But the people involved are going to have to work it out, and president McCain needs to shut the hell up. For once.

Friday, May 17, 2013

I'll retire to Bedlam . . .

As I have mentioned now and again, I am afflicted with a lengthy commute, during which I tend to OD on National Pubic Radio. (Did I commit a typo?) Lately it's been absolutely unendurable -- nothing but an endless stream of ridiculous bullshit about how ordinary imperfect operations of government are the worst thing since Hitler or something. Meanwhile, stuff is happening in the world that you know, actually matters, but we obviously don't need to know about it.

Sure, as Ezra Klein lays out very clearly, perhaps with a bit too much restraint, it's all about nothing, so he expects it just to go away. Unfortunately, it is completely irrelevant whether any of this crap is meaningful, has anything to do with president Obama, or is even wrong. If the Republicans keep talking about it, and the corporate media keeps channeling everything they say and Cokie and Mara keep yammering on about how the Obama presidency has now officially failed, well then -- that will be the reality.

There's nothing we can do about it.

Thursday, May 16, 2013

Science and Evidence

This may not be the most entertaining post ever, but it's necessary in order to get on with our story. Clumsy exposition, if you will.

Many people make a distinction between science-based medicine and evidence-based medicine. They're closely related, to be sure, but not quite the same.

Science depends on evidence, and respects evidence. But it does not consist only of evidence. It includes deductions from evidence; hypotheses -- conjectures to be tested; and theories, which are explanations about the causal relationships among phenomena and the unobserved structures that underlie observations.

I'm sure most readers already know that the word "theory" is widely misunderstood, as being synonymous with "hypothesis." It is sometimes casually used in that way, by people who should know better, but I have been trying to discipline myself not to do that. Theories can be conjectural -- some of them also have the status of hypothesis -- but they aren't necessarily. Some of them are very well tested and as certainly true as anything can be, subject to refinement. Often a broader, more embracing theory will swallow up an old one, without exactly falsifying it. For example, Newtonian gravity still works well enough for many applications, but it does break down where conditions are extreme or we need extraordinary precision.

Anyway . . .

There are empirical remedies, that seem to work even though we don't know why. Often, alas, they don't work very well, or they don't work with everybody who seems to have the indication, or the balance of good and bad effects is not what we would like it to be. Psychiatric medications are, at best, in this category. People with disabling psychoses generally calm down and have reduced delusions and hallucinations if they take anti-psychotics, but nobody knows why. Randomized controlled trials provide evidence for effectiveness -- along with a lot of terrible side effects -- but there isn't any real scientific understanding of psychosis.

On the other hand, we now have a good understanding of how, say, aspirin works. For millennia willow bark was an empirical remedy, then acetylsalicylic acid was isolated in the 19th Century, then we figured out -- or rather John Robert Vane did, in 1972 -- that it inhibits the synthesis of cell-signaling molecules called prostaglandins and thromboxanes. The former accounts for the anti-inflammatory and analgesic effects, the latter for the anticoagulation effect. (I think -- I'm not a real doctor.) Anyway, knowing that, we can figure out a whole lot more about aspirin's good and bad effects, and try to find drugs that have more of the good ones and less of the bad ones. (We've made some serious mistakes along the way with that, but that's another story.)

Philosophically, this distinction is very important because the strength of new evidence depends not only on the inherent properties of an observation, such as the design of the experiment that produced it, but also on its prior plausibility. The famous p value is almost universally misunderstood. If we do an experiment and get a p value below .05 for a result which is a priori highly implausible, we cannot conclude that the chance the observation is true is 95%. It just isn't. It's likely just a fluke. On the other hand if we do a trial and get a p value of .2 or .3 for a highly plausible result, the hypothesis is very likely still true -- in fact, we should be more confident that it is true than we were before, even though our observation is officially called "statistically insignificant." This misleads many people into thinking that the study undermined the hypothesis, when it did no such thing.
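You can see why prior plausibility matters with a few lines of Bayes' rule arithmetic. The numbers here are my own illustrative assumptions, not from any particular study: suppose trials have 80% power and we call p < .05 "significant."

```python
def prob_true_given_significant(prior, power=0.80, alpha=0.05):
    """Bayes' rule: given a 'significant' result (p < alpha), what is the
    probability the hypothesis is actually true? prior = fraction of tested
    hypotheses that are true; power = chance a true effect gets detected."""
    true_and_sig = prior * power          # true hypothesis, significant result
    false_and_sig = (1 - prior) * alpha   # false hypothesis, significant by chance
    return true_and_sig / (true_and_sig + false_and_sig)

print(prob_true_given_significant(0.01))  # a priori implausible: about 14%, not 95%
print(prob_true_given_significant(0.50))  # a priori plausible: about 94%
```

Same p value, same trial design, wildly different conclusions -- the difference is entirely in what we had reason to believe beforehand.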

A very good example is the Oregon Medicaid experiment. In fact, enrolling in Medicaid almost certainly does ultimately have beneficial biological outcomes for people with diabetes and high blood pressure. Contrary to general interpretations, and in fact to its own authors' stated conclusions, the study did not provide evidence to the contrary.

I'll try to explain further as I go on to discuss evidence.

Tuesday, May 14, 2013

Science is Hard

Yes it is. Or it certainly can be. Back in Flexner's time and right through mid-Century, obviously, even though we didn't have any high quality randomized trials going on, doctors were doing stuff. Some of it was probably helpful much of the time. For example, they knew to amputate severely injured limbs, especially if there were signs of putrescence. If there's an accessible tumor, cutting it out can be helpful. If it isn't malignant, it's curative. Digitalis had been used for heart disease since the 18th Century, and it is indeed helpful. There were other so-called empirical remedies back then as well, by which we mean remedies that appear to work but we don't know why.

Digitalis has survived as a useful treatment, but a lot of what doctors have done routinely for many years has not. In the 1946 National Formulary of the American Pharmaceutical Association, pills containing mercurous chloride were listed as treatment for "biliousness," a condition thought to be caused by insufficient flow of bile and characterized by constipation, headache, and general malaise. Mercury was thought to stimulate the liver; it did definitely counteract constipation, to put it mildly. Of course it is actually poisonous and long-term use of this compound was deleterious indeed.

So why did doctors believe in ineffective or even dangerous remedies? (It wasn't long before this time that they had given up bloodletting.) There are a few reasons.

The most basic is that most conditions that cause discomfort or suffering either get better on their own in a while, or fluctuate in severity. People are most likely to consult doctors when they have symptoms. Whatever nostrums or mumbo jumbo the doctor provides will then likely get credit for the patient shortly feeling better. This is how superstitions generally get started.
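This mechanism -- regression to the mean -- is easy to simulate. A toy sketch with made-up numbers: symptoms fluctuate randomly around a personal baseline, and patients only see the doctor on unusually bad days, so whatever the doctor does, the next measurement tends to be better.

```python
import random

random.seed(2)

def symptom():
    """One day's symptom score: baseline of 5, plus random day-to-day noise."""
    return 5 + random.gauss(0, 2)

improved = 0
visits = 0
for _ in range(100_000):
    today = symptom()
    if today > 8:             # feeling bad enough to consult the doctor
        visits += 1
        tomorrow = symptom()  # the "remedy" given at the visit does nothing at all
        if tomorrow < today:
            improved += 1

# The do-nothing remedy gets credit for the vast majority of visits.
print(improved / visits)
```

Well over 90% of the simulated patients "get better" after a completely useless intervention, which is exactly how a nostrum earns its reputation.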

Furthermore, similar symptoms may have multiple causes. Even if half the people don't get better after consuming mercury, the treatment will end up getting credit for those who do. It might even really help some people, but harm twice as many. Nevertheless, thanks to confirmation bias, those who believe in it will continue to use it and be persuaded by their observations that it is sometimes effective. (Those it helps + those who get better regardless all redound to its credit; it is presumed unconnected to the harms it causes, because we have no such expectation.)

Another reason is that people just tend to like it when doctors do something, anything. The so-called placebo effect is greatly misunderstood and over-hyped, so I'll steer clear of the term for now. Let's just say that confirmation bias, and perhaps other psychological mechanisms, mean that if people expect to feel better, they will say they feel better and perhaps, in some sense, will feel better. "Feeling better" is, after all, a purely subjective state. I could have exactly the same physical symptoms but be less troubled by them. And our experience of pain is very much affected by how much attention we pay to it. Whatever signals are coming from the peripheral nerves, we may have very different degrees of caring about them. A doctor's kindly ministrations and our presumption that we're going to feel better could be all it takes to make it so, for a while -- even if the cancer is still spreading.

All of the above, in addition to afflicting the practice of licensed, scientifically trained physicians, is of course the foundation of all forms of quackery.

In extreme cases, what we call anecdotal evidence can be quite valid. As a classic example, no one says we need a randomized controlled trial of parachutes. Everybody knows what will happen, pretty much inevitably, if a person falls from a height of 2 miles. That people usually survive the fall when using a parachute is all we need to know. The curative power of insulin for people with Type 1 diabetes falls in this category, as does lemon juice for scurvy. Dr. Lind would not actually have needed his various active controls to prove the point. But these cases are rare.

Next time, a bit on the difference between the concepts of science-based medicine and evidence-based medicine.

Monday, May 13, 2013

The Fog of Science

As you may recall, in our last episode, Abraham Flexner has persuaded the world -- or at least the space between the North Atlantic and the North Pacific -- to put medicine on a scientific basis. But, it turns out that is very easy to say and very hard to do.

Back in 1910, people knew more about human biology than they did in 1850 or 500 BC, to be sure. But the usefulness of that knowledge for making or keeping people healthy -- whatever that means, and remember we still haven't figured that out -- was very limited. To take stock briefly of our relevant knowledge at the time, we knew something about pathogenic microbes and the importance of sterilizing surgical instruments and wounds. We didn't have any antibiotics, however. There were some empirical remedies, such as opioid analgesics, and, well, that's about it. We didn't know anything about endocrinology, genetics, the immune system, neurology, oncology, you name it. You could be doing laboratory research and dissecting cadavers and peering at cells under a microscope but none of it was doing your patients any good.

It so happens that in 1747, a British ship's surgeon named James Lind decided, more or less at random, to feed various stuff to sailors suffering from scurvy. Two of them got a quart of cider every day, two others got vinegar, two got "elixir of vitriol," which is sulfuric acid; two got sea water; and two got oranges and lemons. You know what happened. However, Lind did not want to recommend that the Royal Navy give sailors oranges and lemons because they were too expensive. It took 50 years before the navy got around to it.

Anyway, as impressive as that was, it wasn't until 1943, nearly 200 years later, that anybody got around to doing another controlled trial of comparable rigor. It was a pretty good one, even by modern standards: double blind, although not truly randomized. It was done in the UK, to test the effectiveness of patulin, a mold extract, for the common cold. And it was negative, i.e. it didn't work. Here's the even worse news: to this day, prescriptions for antibiotics continue to be written for people with common upper respiratory tract viral infections.
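The statistical machinery that eventually grew up around such trials is not mysterious. Here is a minimal sketch, using invented counts rather than the actual 1943 data, of how a two-arm trial's outcome gets assessed: a chi-square statistic computed on the 2x2 table of recovered versus not recovered, by arm.

```python
# A chi-square test on a 2x2 table: does the recovery rate differ
# between the treatment and control arms more than chance would explain?
def chi_square_2x2(a, b, c, d):
    """a, b = recovered/not in treatment arm; c, d = recovered/not in control."""
    n = a + b + c + d
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented counts: treatment arm, 60 of 100 recovered; control, 58 of 100.
# This is what a negative result looks like: the statistic is tiny, far
# below the ~3.84 cutoff for p = 0.05 with one degree of freedom.
stat = chi_square_2x2(60, 40, 58, 42)
print(f"chi-square statistic: {stat:.3f}")
```

With numbers like these, the honest conclusion is "no detectable difference" -- which is exactly the conclusion the common-cold trial forced on its investigators.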

From then on we continued to see more and more clinical trials, of varying quality; and we came up with more and more categories of effective treatment for problems other than infections susceptible to antibiotics. However, the intrusion of knowledge and evidence into medical practice was gradual and almost as often counterproductive as it was beneficial. There are many reasons for this which continue to vex all of us who work in medicine and related fields, and which incite volcanoes of debate and recrimination. I'll tackle the issues in upcoming posts.

Friday, May 10, 2013

Wingnuttery kills

Among the sexually transmitted infections, Human Papilloma Virus (HPV, to its friends) is among the least glamorous. Everyone knows syphilis and gonorrhea, but for some reason HPV doesn't share their celebrity. It should, because some strains of it cause a very common and highly unpleasant problem, genital warts -- or warts wherever people's parts happen to interact, and you can use your imagination. Other strains cause cancer -- cervical, genital, anal, oral and pharyngeal. In fact, HPV is basically the cause of cervical cancer.

So it doesn't take a sodomite to see that a vaccine which is highly effective in preventing transmission of HPV would be a good thing for humanity. Or so one might think. Texas Governor Rick Perry found out the hard way that this isn't so by doing the right thing for what may well be the one and only time in his term in office, and mandating that adolescent girls get the vaccine. All the lovers of Jesus in Texas immediately raised a massive outcry because they knew that the only reason their daughters weren't having sex with the entire football team was fear of genital warts. Michele Bachmann figured she had a knockout punch in a Republican primary debate in 2011 when she raised the issue, and said after the debate "There’s a woman who came up crying to me tonight after the debate. She said her daughter was given that vaccine. She told me her daughter suffered mental retardation as a result. There are very dangerous consequences." Sarah Palin weighed in with some cheerleading.

Hoo boy. It turns out that in Australia, the people are not insane. They've been vaccinating girls since 2007, and guess what? The diagnosis of genital warts in women and girls under 21 went down from 11.5% to 0.85%. It's too soon to say what will happen to cancer, but presumably in a decade or two we'll see that going way down as well. We have not, however, heard of an epidemic of sexual promiscuity in the land of the wallaby and the billabong.

So let's be clear. Religion is bad for your brain, and your body, at least if you make it a guide to any sort of decision. We've eradicated smallpox and we're almost done with the guinea worm and polio -- but religion has turned out to be the main obstacle to finishing the job with polio, in this case Islamic leaders claiming the polio vaccine is a Christian plot to sterilize Muslims. HPV is potentially eradicable as well. But first we have to eradicate the ravings of idiots.

Thursday, May 09, 2013

Okay, back at it . . .

Pardon the interruption. The radical discontinuity in 1910 was the famous Flexner report. Abraham Flexner, who worked for the Carnegie Foundation for the Advancement of Teaching, was commissioned to study medical education in the U.S. and Canada. Back then there were 155 medical schools in the former British possessions, all of which he visited. (He is often said to have studied medical education in North America, but that would have to include, err, Mexico. I digress.)

It turned out that most of them were not affiliated with universities, but were owned by one or a few physicians. They had what Flexner considered insufficient curricula and clinical training. States generally did not regulate the practice of medicine or have licensing requirements for physicians. Most important, in Flexner's view, medical training and practice was not uniformly based on science. His ideals were the few university-affiliated medical schools of the time, and particularly Johns Hopkins. Flexner's recommendations led to the current model of medical education based at universities, followed by clinical apprenticeship at university-affiliated hospitals, taught by clinicians who were also research scientists, with claims of effectiveness grounded in scientific knowledge and reasoning. Less directly, his work led to the imposition of standards for medical licensing and practice. These were imposed by the states piecemeal, and I have not come across a comprehensive history, but by now we take it for granted that every state does this.

Following this revolution, the number of medical schools in the U.S. at first shrank dramatically, and as it rebounded, all of the new ones adhered to the new standards and philosophy. For better or for worse, medical school faculty came to be evaluated based on their research activities, rather than their teaching. Various heterodox "schools" of medicine, such as homeopathy and chiropractic, lost their claim to legitimacy within the new structure of scientific medicine, because their claims are biologically implausible and not supported by rigorous experiments. (Although, inexplicably, at this late date, they seem to be worming their way back in. But that's for another day.)

Medicine's claims of scientific authority were certainly vindicated by many important developments throughout the 20th Century, notably effective antibiotics, insulin for Type 1 diabetes, incremental advances in surgery and trauma care that ultimately added up to huge benefits, and effective immunization against more and more pathogens. As recently as 20 years ago, when I first got into this racket, there was a legitimate argument about whether the contribution of scientific medicine to health and longevity at the population level was very important, or even provably positive; but that is no longer true.

But, history has not ended. Medical practice, and the physician and patient roles and their relationships, remain deeply problematic.

To be continued.

Tuesday, May 07, 2013

Bloggers are human too

I'm afraid I can't say anything intelligent today because I'm feeling like the lowest piece of crap in the Delta quadrant of the galaxy. At least this gives me a chance to comment on the whole disease ontology thing. I can't claim to be enjoying the highest attainable state of social and psychological well-being right now, and I'm sure a psychiatrist would find something to diagnose me with, but no, I don't have a disease. I suffer from the human condition.

I don't think a robot could console me right now, and I'm not one who benefits from comfort food or shopping sprees. I'll just have to carry on. So shall you.

Monday, May 06, 2013

Yo, Robot!

We interrupt this long-form essay to report on my afternoon at our Second Annual Symposium on Human-Robot Interaction. Really. I was there because I study human-human interaction and I've been roped in -- well alright, I didn't really mind, it's kind of interesting -- to letting computer scientists play with my concepts, and they might be useful for getting machines to communicate with us more usefully.

I won't go into that in a lot of depth here, but what I do want to talk about is where the nerds think this whole thing is headed. You may or may not like it. One of the potential applications for interacting robots is to be companions and caregivers for elderly people. This actually gets talked about a lot. The social problem is that more and more people are living to be old and frail and widowed and socially isolated. It's too expensive to give them homemakers and home health aides, plus they're lonely. So maybe we can give them a robot.

I don't know about you but I find that fairly icky. Of course, if you could make such a robot, it could also be a house servant for able-bodied families, a janitor, a waiter -- lots of jobs. Even, yes, a nanny, and they were talking about robots being essentially Head Start teachers as well. Is this good?

Where I come in technically is that basically, Siri works, kinda, because all you do is ask her -- excuse me it -- questions and maybe give some basic instructions in a limited domain, such as calling a number. But your robot companion has to accurately interpret much more complex domains of speech, what we call the full range of illocutionary acts -- such as all the various kinds of questions, promises, and expressions of feeling, even jokes; figure out your intentions, desires, and state of mind; and respond appropriately. Note that I didn't say the robot has to understand anything -- that's different. In fact, what we've learned from decades of failure at artificial intelligence is that we have much more success getting computers to respond appropriately to language inputs if we forget about understanding and just automate the responses based on statistical correlations of language content with illocutions.
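To make the "statistical correlations" point concrete, here is a toy sketch (the act labels and training utterances are all invented for illustration): score an utterance against word counts observed for each illocutionary act and pick the best match, with no representation of meaning anywhere in the system.

```python
from collections import Counter, defaultdict

# A tiny made-up training set pairing illocutionary acts with utterances.
training = [
    ("question", "what time is it"),
    ("question", "where did you put my keys"),
    ("question", "is it raining"),
    ("request",  "please call the pharmacy"),
    ("request",  "could you close the window please"),
    ("feeling",  "i feel awful today"),
    ("feeling",  "i am so happy about this"),
]

# Count how often each word occurs under each act label.
word_counts = defaultdict(Counter)
for label, text in training:
    word_counts[label].update(text.split())

def classify(utterance):
    """Score each act by its accumulated counts for the utterance's words."""
    words = utterance.split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("where did it go"))   # prints "question"
```

Nothing here "understands" anything; the words of questions simply correlate with the question label. Real systems use vastly more data and better statistics, but the design choice is the same: respond appropriately without modeling meaning.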

Fortunately, we are so far from this that I'm not worried about it happening any time soon. I think. But if we do give robots more and more autonomy and behavioral flexibility, then we have to start worrying about robot ethics. Also, does tossing people a robot as a substitute for human companionship or nurture mean we are meeting a social need, or consigning people to a kind of hell?

Friday, May 03, 2013

health and medicine, continued

(In case you haven't picked up on it yet, I have embarked upon a long-form essay. It will continue.)

So what is “medical” attention? It is well known but seldom seen as remarkable that most societies known to history and anthropology, even small scale ones with limited hierarchy and division of labor, have cultural roles for specialists in healing people. In societies large enough to support full-time specialists, as far as I know there is always a full-time healing profession. In some times and places these people have also been more generalist priests, with additional assigned powers, and priests can always try to get you some divine intercession, but usually there is a secular healer role as well, or more than one. There are some systems in which shamans can heal or sicken, curse your enemies, make it rain, make your object of desire fall for you, or whatever. There’s certainly variety. But in Europe and its metastasis to North America, since classical antiquity, physicians and priests have been distinct, as they are now generally around the globe.

One reason I find this remarkable is that for most of history, almost everywhere in the world, these people couldn’t actually do much, if any, good, in most cases. They may have had some useful skills – to set broken bones, maybe to cut out or saw off rotting parts, perhaps out of their formulary of dozens or hundreds of concoctions a few were truly beneficial. But as we now know, most of what they did was at best useless and often harmful, the best-known example being bloodletting. But it’s perhaps less widely recognized that, lacking any concept of pathogenesis, surgeons and obstetricians were probably the world’s leading source of infection, and thereby managed to kill innumerable patients and birthing women.

The scientific revolution that upended cosmology and physics starting in the 16th Century (Copernicus died in 1543, Newton in 1727) didn’t really get going in biology until the 19th Century, and even then it did not at first have a great deal to offer to medicine. Darwin obviously caused quite the brouhaha, but his theory was not immediately relevant to medical practice. Ignaz Semmelweis figured out the importance of hygienic practices, such as physicians washing their hands and instruments between patients, around 1850. But he didn’t have any scientific explanation for his observations, and he was generally scorned. Once Pasteur figured out about a decade later that microbes can cause disease, we were getting somewhere; surgery and childbirth became more hygienic by the end of the century, and Pasteur’s work also led to the development of vaccines in addition to the long-available cowpox inoculation against smallpox. (That was based on empirical observation, with no explanatory theory.)

So, by the beginning of the 20th Century, medicine was doing less harm than before, but still couldn’t do much good. Effective treatments for the vast majority of human ills still did not exist. Just about anybody could open a medical school and confer a medical degree, and just about anybody did. Most of these schools were owned by one or two doctors, existed to make a profit, didn’t teach much science, if any, and had low requirements for entrance and degrees. There were many competing systems of thought about the nature and causes of ill health, almost all of them completely bunk, some of them unfortunately still with us, such as homeopathy.

Then a radical discontinuity occurred in 1910.

Next: The Flexner Report and the Dreams of Reason

Wednesday, May 01, 2013

What is health? (continued)

A synonym for the medical enterprise in the English speaking world is “healthcare,” which you will note has now become one word. (It was still two words when I was a child, and for a while I corrected my students’ papers if they made it one.) So medicine – the social institution led, at least until recently, by people possessing the credential Doctor of Medicine – is purportedly dedicated to caring for our health.

When people visit physicians, they usually do so voluntarily. Presumably, they do this because they want the physician to make them healthier, or keep them healthy. What exactly does that mean? What are they seeking?

This question appears simple. We use the word health all the time. Most people don’t reflect on its meaning any more than they reflect on the meaning of “breakfast” or “basketball.” They answer the question at the top of this post with little thought. It’s obvious, right? Health is . . . .

Actually that’s a very tough question. The preamble to the constitution of the World Health Organization, written in 1946, used this definition: “Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.” Not only that, but “the enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being.” The second quote is chiseled into the façade of the main building of the Harvard School of Public Health. That is definitely uplifting.

It is also completely nonsensical. Start with the idea of “complete . . . well-being.” Do we really want to say that we’re unhealthy if there is anything we wish for that we do not have? And even if we can come up with a more realistic definition of complete well-being, is there any point in proclaiming that every human being has a right to the highest attainable standard of whatever it is? If we do endorse such a right, it’s not just “one of the fundamental rights,” it’s the only one, because there wouldn’t be anything left over.

We must begin by accepting the human condition. We are all of us born with an incurable, inevitably progressive disease which, beginning in our third decade, gradually degrades our physical and mental capacities and is ultimately fatal. We are, in other words, mortal, and we grow old. What is more, our initial endowments differ. If a congenital condition deprives us of complete well-being, have we suffered a violation of our fundamental human rights? Or is there perhaps a more constructive way to look at that situation?

It doesn’t take much thought to see, further, that my well-being may conflict with yours, and that determinants of my own well-being may conflict with each other. I have the privilege of living in a beautiful place in the country, and having a very desirable job in the city. But this privilege is conferred by the internal combustion engine, which spews ultrafine particles into the atmosphere that contribute to heart and lung disease; causes crashes that kill 36,000 Americans every year and seriously injure many more; and is changing the global climate, threatening mass extinctions and unimaginable human misery.

I could go on about this, even write a whole book about it. But our present purposes do not demand it. People don’t go to physicians to claim their fundamental human right to the highest attainable standard of health. They go because they have a particular complaint that they think may be amenable to medical intervention, and which is sufficiently disturbing to make the trouble, possible expense, and downsides of medical attention seem worthwhile.

More on this anon.