No, I'm not running out of titles, that just seemed right for this one. A few days back I wrote that when I first got into this business, in the '80s, it was legitimate to argue that medical services (so-called "health care") made only a small contribution, at best, to population health and longevity; but that this is no longer true. We spend more and more on health care, but we are getting something for it. Still, is it worth it?
In the new NEJM, David Cutler, Allison Rosen and Sandeep Vijan try to answer this question. (Subscription only, here's the abstract.) They looked at gains in life expectancy from 1960 to 2000, compared with increases in medical spending. Now, life expectancy is a slightly dodgy construct. We talk about life expectancy at various ages, from birth on up, but how do we know how long an infant born today can expect to live? We can't possibly know; the calculation is made by assuming that the current death rates at various ages will apply to the infant as she or he grows up and ages. In fact those rates could change for better or for worse, and the average baby born today could live a longer or shorter time than the life expectancy at birth. Whatever.
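For the numerically inclined, here is a minimal sketch of how that kind of "period" life expectancy gets computed -- the death rates below are made up for illustration, not taken from any actual life table. You just pretend today's age-specific death rates will hold forever and add up the expected years of survival.

```python
# Minimal sketch of a period life expectancy calculation.
# The death rates below are invented for illustration; real life tables
# have one rate per single year of age.

# Hypothetical probability of dying during each 10-year age band.
death_rate_by_decade = [0.01, 0.005, 0.01, 0.02, 0.04, 0.08, 0.20, 0.45, 0.80, 1.00]

def period_life_expectancy(rates, years_per_band=10):
    """Expected years of life at birth, assuming today's rates never change."""
    alive = 1.0          # fraction of the cohort still alive entering each band
    expected_years = 0.0
    for q in rates:
        deaths = alive * q
        survivors = alive - deaths
        # Survivors live the whole band; those who die get credited about half of it.
        expected_years += survivors * years_per_band + deaths * years_per_band / 2
        alive = survivors
    return expected_years

print(round(period_life_expectancy(death_rate_by_decade), 1))
```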
Next, they assumed that 50% of the gains in life expectancy since 1960 are due to medical intervention. That sounds as though they picked the number arbitrarily but actually they make a pretty good case that it's probably somewhere close to accurate.
So, first of all, you'll be happy to know that life expectancy at birth increased from 69.9 in 1960 to 76.87 in 2000. At least you'll be happy to know that if you're a newborn baby. Life expectancy at age 65 increased from 14.39 years to 17.86, about half as much. In other words, if today is your 65th birthday, and you're the average person, you can expect to live to be nearly 83 years old. Of course, you aren't that person. You'll probably do better if you are female, and worse if you are male, by the way. But you knew that.
Their analysis is a bit misleading in that they spend most of their time talking about the average total expenditure per person since 1960, which yields a figure of $19,900 spent per year of life gained per person. That doesn't sound like a bad deal at all.
But it's much more appropriate to look at trends over time, and over the life course. Here it gets a bit more worrisome. We spend much more on people 65 and older, obviously, and that differential has increased. The average cost per year of life gained for people 65 years old was $75,100 from 1960 to 1970, but $145,000 from 1990 to 2000. Of course, if their estimate of the percentage of gains in life expectancy that results from health care is off, the numbers go up or down accordingly.
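To make that last caveat concrete, the arithmetic behind a "cost per year of life gained" figure is just added spending divided by the years of life expectancy gained times the fraction of the gain you credit to medical care -- so halving that fraction doubles the price tag. Here's a toy sketch; the lifetime spending figure is a placeholder of mine, not a number from the paper.

```python
# Sensitivity of "cost per year of life gained" to the attribution assumption.
# added_spending is a placeholder, not a figure from Cutler et al.

def cost_per_life_year(added_spending, life_exp_gain_years, fraction_due_to_care):
    years_credited_to_care = life_exp_gain_years * fraction_due_to_care
    return added_spending / years_credited_to_care

gain = 76.87 - 69.90   # years of life expectancy gained at birth, 1960-2000

for fraction in (0.25, 0.50, 0.75):
    print(fraction, round(cost_per_life_year(70_000, gain, fraction)))
```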
Would you say it's worth it to spend $145,000 so you can die at 81 rather than 80? Or so somebody you love can? Maybe, but as with everything else, there's an opportunity cost. That same $145,000 could, literally, save the lives of hundreds of poor children who will die for lack of a $4 bed net to prevent malaria, or a village well to deliver clean water, or a condom so they don't become an orphan. But then again, we could get the money just as easily by ending the occupation of Iraq, on which we have already squandered $300 billion. So it's not a direct tradeoff. Medical care could also have additional benefits -- relieving pain, preventing or correcting disability -- and costs -- yup, causing pain, causing disability. Probably there is some additional net benefit there but I'm not convinced of it. Unnecessary or failed tests and procedures cause a lot of damage.
Then of course there is the issue, which I've about beaten to death here, that European countries and Canada get even better results from their health care systems, while spending half as much or less. They have seen the same percentage increases over time, but the gap remains. So even if we want to keep spending the big bucks to squeeze out another year in the rocking chair, we're still wasting half of it.
I'm not telling anybody what we ought to do, but those are some numbers on which to base your opinions.
Thursday, August 31, 2006
Yeah yeah, I watched it
It, being of course ABC's "Last Days on Earth." Like yesterday's, I guess this post is obligatory.
First of all, ABC News gets full props for unabashedly presenting a scientific worldview. For the most part, they resisted the compulsion to balance reality with delusion, although for some reason they had to sneak in a preacher to mumble mumbo jumbo about biblical prophecy during the discussion of the Yellowstone supervolcano. This is particularly bizarre since the event will in no way resemble any biblical prophecy.
Other than that, they kept their skirts clean, even staying rock solid on evolution when that big rock hit the Yucatán peninsula and wiped out the dinosaurs.* They get a big gold star for putting cleanliness above godliness in the punch line segment on global warming. They stomped on global warming deniers like they were cockroaches, sparing neither fossil fuel executives, nor James Inhofe, nor even their own former selves. I particularly liked the comparison to tobacco executives, hitting another very ugly bird in passing with one asteroid impact. Then when they brought in President Gore as the hero of the story they truly started down the path of penance for a multitude of sins.
I can hear the wingnuts howling already, from way over here. I'm sure corporate headquarters will force 20/20 to do a special on global warming and evolution starring Inhofe and Ann Coulter.
So, could it have been better? Sure, I have to earn my curmudgeonly stripes or else what am I doing here? So, in some particular order:
There wasn't much science education in there after all. They frog hopped from conclusory treetop to conclusory treetop without much in the way of explanation. What exactly is a gamma ray burst again? For that matter, what's a gamma ray? How does that greenhouse effect work? If you were disinclined to believe any of this, the only real persuasion was provided by authority. Everybody knows Stephen Hawking is supposed to be a great genius, Neil deGrasse Tyson is a familiar figure and he seems to know what he's talking about. . . Maybe pick five apocalyptic scenarios instead of seven and give us a little more background. Which brings us to number 2:
They didn't do a very good job of sorting out the parameters of probability and severity. A nearby gamma ray burst would indeed sterilize the planet, but this is of more philosophical than practical importance. Ditto with a wandering black hole swallowing up the earth. That such events happen from time to time in the universe and have no doubt wiped out fertile planets here and there in the ineffable vastness of time and space tells us something about the fundamental nature of reality, but they aren't worth worrying about as a practical matter. However, more or less conflating such events with supervolcanoes and a global flu pandemic, or even a bioengineered plague, tends to leave an impression that the latter are existential threats, which they are not. Which brings us to number 3:
I think they made a mistake by emphasizing the worst case global warming scenario, which happens to be exactly where the wingnuts have focused much of their attack on President Gore already. That 40 foot sea level rise might happen in a hundred years or maybe 150. There is speculation that the Greenland and West Antarctic ice sheets could collapse much sooner, but that's all it is. Given that the President is under sustained and vicious attack over this, it's probably best to stand on the firmest ground -- agricultural disasters, monster cyclones, the spread poleward of tropical diseases, water shortages, mass extinctions, etc. That's all plenty bad enough, and impossible to seriously dispute.
Okay. I'm also glad they reminded us that we still have that itty bitty problem of nuclear war to worry about. Good job, folks -- and for the record, Rudy Bednar is the executive producer. Michael Bicks is the senior producer. Maybe, just maybe, some way, somehow, more of the corporate media will start to focus on stuff that actually matters, and our civilization will have a chance. But then I remember that this is the same outfit that employs John Stossel. Maybe there's no hope after all.
*Although they repeat the commonplace inaccuracy that the dinosaurs were made extinct. Most species of dinosaurs died out, but not all. The dinosaurs' descendants are all around us, among the most numerous and visible large animals on earth. (I say large animals to distinguish the birds from the far more numerous insects and various marine taxa.) This is a very important corrective to what I might call folk conceptions of evolution.
Wednesday, August 30, 2006
Obligatory post
I know you've already read about this, but I guess I have a moral obligation to comment. The new Current Population Survey finds that the percentage of Americans without health insurance has risen year-to-year from 15.6% to 15.9%. (That's from 2004 to 2005. I shudder to think what it is now.) The main reason this is happening is that a smaller percentage of jobs come with health insurance, and those that do require higher employee contributions which many low-wage workers simply cannot afford.
The poverty rate held steady (which is a disgrace in a period of substantial economic growth), but that's actually bad news for the health insurance story, because people whose incomes are above the official poverty line are generally ineligible for Medicaid benefits even if they are categorically eligible (i.e., elderly, disabled, pregnant or receiving TANF, or under 18). Children in families with incomes up to 200% of poverty may be eligible for SCHIP, but only if their parents or guardians can afford to make the required contribution.
What we're seeing is low unemployment, but more and more crappy jobs without health care benefits. The system of employer-provided insurance is imploding, but there seems to be no urgency to do anything about it. Why Democrats can't get it together to come up with solutions and campaign on this issue is a profound mystery.
Oh wait -- they have to raise money too. Also, they're terrified of a return visit from Harry and Louise. (For those of you who are too young to remember, that's the phony couple the insurance industry hired to scare the shit out of people about the Clinton health care reform plan.)
Back when I was a youngster, we had a civil rights movement, and an anti-war movement, and a women's movement, and a poor people's movement in this country, and they actually did succeed in changing some things. I'm perfectly willing to grow what's left of my hair long, put on a tie-dyed T-shirt, and learn the chords to some Pete Seeger songs, if that's what it takes. Let's make something happen.
Tuesday, August 29, 2006
Sounds like a good idea, huh?
West Virginia, a relatively poor state, has come up with an innovative approach to the taxpayers' burden of providing Medicaid. The state cut the basic benefits -- for example, it limits prescriptions to no more than 4 per month, and doesn't provide substance abuse treatment or mental health services.
That sounds like uncompassionate conservatism. However, people can qualify for an "enhanced" plan that doesn't have these limitations by signing a "member agreement" in which they take personal responsibility for their health. To keep reasonably comprehensive insurance, people have to keep their medical appointments, get the standard screenings, take their pills, and follow "health improvement plans," i.e. programs recommended by their doctors such as diabetes management or nutrition education. The federal government quickly approved the plan.
What's wrong with that? The state is saying it will continue to provide good medical insurance to poor people, but they have to meet the state half way and do their bit to keep themselves and their kids healthy. We all should accept personal responsibility and not expect the Nanny State to take care of us when we won't take care of ourselves, right? In NEJM, Robert Steinbrook and Gene Bishop and Amy Brodkey offer separate critiques.
I don't have much to add to what they say but I'll summarize some key points in my own words. First of all, and this one ought to be obvious even to the least compassionate conservative, one reason why people might not keep one or more of the four commitments is because they are mentally ill, or have an addiction problem. But these are exactly the people whose mental health and substance abuse treatment coverage you are going to eliminate. Like, duhh.
But you don't have to be cognitively or behaviorally impaired to fail to "comply" with treatment. Poor people have limited access to transportation, all the more so in largely rural West Virginia. One reason they might miss appointments is because the bus is late or they can't get a ride; or because their child is sick, or they can't get off of work. People in low wage jobs often don't get paid for sick time, or risk being fired if they miss work. Sometimes people don't take their pills the way their doctors want them to because of poor communication between physician and patient, or because of side effects.
Another perfectly valid reason is that the patient doesn't think it's such a good idea to take the pills after all. Remember, it's a fundamental principle of medical ethics that patients are supposed to be autonomous. Accepting treatment is voluntary. Affluent people don't have to take pills if they don't want to, and they still keep their insurance. Forcing people to follow regimens prescribed by their doctors is unethical, and in fact there has been no legal basis in any state for forcing competent adults to accept medical treatment against their will -- until now.
Doctors are supposed to report patients who don't comply. Will any physicians actually do that? It would constitute a major violation of medical ethics, to deliberately harm a patient in that way in order to coerce her into doing what you think is best, or simply punish her for failing to do so.
That the state and federal bureaucracies responsible for Medicaid both approved this proposal with little or no real public debate, and that the medical profession in West Virginia appears to have put up little, if any, visible public resistance, is truly shocking. (And by the way, none of doctors Steinbrook, Bishop or Brodkey practice in West Virginia.) But, in George W. Bush's America, poor people aren't really people at all.
Monday, August 28, 2006
New Business Announcement
As you probably know, The People's Republic of Massachusetts has passed legislation requiring that everybody have health insurance by next year. Part of the plan is that employers who don't provide at least a 70% subsidy for their employees have to pay $295 per worker. Not enough to do much to encourage compliance, but still an issue for the Christian Science Church, which is based in Boston. The Christian Scientists want to be allowed to offer faith healing insurance to their employees.
Mark Unger, who describes himself as a metaphysician, qualifies under the church's faith-healing insurance plan to treat patients through prayer. He said his job is "to lift up the patient above the physical level to the spiritual, to get them to look beyond the symptoms to the spiritual truth about what's going on." Unger charges $32 for a treatment, during which he prays for a patient to promote healing. The Ashland resident said he can pray anywhere, but prefers a quiet place, usually not with the patient. "My style of prayer is just an absolute, quiet listening to God," he said. While he doesn't make medical diagnoses, Unger says he has cured a patient's skin cancer with prayer. "It dried up and dropped off," he said.
John Q. Adams of Boston, who said he has worked as a Christian Science faith healer full-time since 1985, described his treatments as prayers that focus on the specific needs of a patient. He said he charges $25 per treatment.
Well, I too am a Metaphysician, and I have great news for you: I will faith heal you for only $20 a session. That's what makes the free enterprise system so great: it means the consumer rules, and competition keeps prices low. Just send me an e-mail, and I'll tell you where to mail the check.
Sunday, August 27, 2006
In the Beginning
Well, no, not the beginning of everything, or even much of anything, in the context of our grand and glorious universe, but it was still a pretty important beginning as far as we are concerned, and that is the beginning of life on earth.
As you will no doubt not recall, in our last episode of this long-running series, I summarized the four essential components of life as we know it:
- A cell membrane consisting of a phospholipid bilayer. This serves to create a protected compartment in which the processes of life can take place unmolested.
- Information for making proteins encoded in DNA.
- A mechanism for making proteins using said information, built out of RNA.
- Proteins.
As I also said, once you have any chemical system that can catalyze its own replication, with the possibility of imperfect copying, Darwinian evolution can begin. By all indications, there were indeed a lot of complex organic molecules present in the primitive ocean, including all the essentials: phospholipids, amino acids (the building blocks of proteins) and nucleotides (the building blocks of DNA and RNA). (See the famous Stanley Miller experiment.) We know that life appeared within a very short time, by geological standards, after the bombardment of earth by debris from the formation of the solar system slowed down enough to allow the crust to cool and the oceans to form -- no more than 200 or 300 million years, and probably less.
Okay, so it must be easy to get life happening on a planet with the right conditions, no? Maybe so, but nobody has quite figured out how to do it yet. I say this with some trepidation because here is where the creationists love to pounce. "You can't figure out where life comes from, so God must have done it!" But of course, people used to attribute plagues, earthquakes, and volcanic eruptions to God, but now we have figured out why they happen, and God has nothing to do with it. (A conclusion which the faithful ought to welcome, since it would seem to salvage a good part of God's reputation, but for some reason most of them don't. Oh well.) So this is just one more thing we need to figure out.
The major difficulty is that you need to have all four of those components working together, but which came first? And how could it have come first, since they are dependent on each other? DNA can't be copied for reproduction, or read for protein synthesis, without proteins and RNA; proteins can't be manufactured without RNA, at least, and RNA doesn't get made without proteins and DNA. The cell membrane is also put together by proteins -- and, an important detail, is pierced by various proteins which act as gatekeepers, determining what gets in and out.
But maybe the cell membrane is the least problematic element. Specific control over what passed through the membrane may not have been necessary in the beginning. It's a great improvement, but could have come later. Phospholipids will form compartments spontaneously in solution. It is conceivable that a pre-biological chemical system that catalyzed the formation of lipid bilayers could have arisen spontaneously. It would tend to persist because it would be protected within its lipid bilayer bubble, but would "reproduce" when the bubbles did happen to burst due to wave action or other causes. If the fragments could go on to recruit lipids from the solution and form new bubbles for themselves, you would have a self-reproducing system and perhaps be on the road to life.
This isn't hard to imagine but nobody has come up with a working example yet. Another possibility is that life started in some other sort of compartment. One proposal concerns tiny holes in volcanic rock near hydrothermal vents in the deep ocean, and says that life may have evolved there for a while before some organism lucked onto the lipid bilayer trick and so freed itself from the rock.
Even so, you've got to figure out how the DNA-RNA-Protein complex could have arisen. DNA stores information to make proteins; proteins catalyze chemical reactions; RNA mediates between them. The problem of how this complex system could originally have been assembled seemed much more tractable after Sidney Altman and Thomas Cech discovered that RNA molecules could, in fact, catalyze chemical reactions, including excising pieces of themselves. Since RNA can store information in the same way DNA does, it appeared the problem was solved, and the so-called RNA World hypothesis was born, as advocated here by Christian de Duve. Catalytic RNA molecules, called ribozymes, could have constituted the first self-replicating chemical systems. Once you have the RNA world going, it doesn't seem terribly difficult for the DNA and protein systems to develop around it, and for the repository of genetic information ultimately to shift to the more stable, and easier to replicate, double-stranded DNA. (Today, so-called retroviruses, including HIV, store genetic information as RNA, and copy it into DNA. So it's obviously possible!)
Alas, formidable difficulties remain. I'm 100 miles from being a specialist in the relevant chemistry, or any sort of chemistry, but those who know say that in the hypothesized pre-biotic environment, many factors would have conspired to terminate RNA chains before they could get up to critical size, and to destroy at least one of the essential nucleotides, among other problems.
The problem is sufficiently challenging that it has inspired many surprising theories. William Martin and Michael Russell believe that the earliest life was not based on organic chemistry at all, but on iron sulfide inside rock. And of course, most boggling to the mind is the panspermia hypothesis, which says that microbes pervade the galaxy and travel between stars as spores embedded in rock. Hence earth was seeded with life from space. This still doesn't answer the question of how life got started in the first place, of course, but it makes the range of possible places and conditions essentially unlimited. It also suggests that if people one day travel to other star systems, they will encounter DNA-based life, perhaps even prokaryotic cells a lot like our own.
You know what folks? I have no idea. And that's good, because it means there is a thrilling journey of discovery still ahead of us. And that is far more satisfying than reading and re-reading a musty old book by people who knew even less than we do.
Friday, August 25, 2006
The reification of constructs
We have been rathah amused by the frenzied flapdoodle over the de-planetification of Pluto. The journalistic profession and much of the public seem to think that this is a major scientific controversy and represents some substantive question about the nature of the universe.
Actually, it's the equivalent of a spat about what to name the baby. There is essentially no disagreement among astronomers about the nature of Pluto and the other objects known as planets. You can continue to call Pluto a planet if you want to, and the police won't knock on your door, nor will you be guilty of believing anything that isn't true. You will be using the word "planet" in a way which a scientific organization no longer chooses to use it, but what the heck, if I ask most people if a tomato is a fruit, they'll say no. The only real issue in the Pluto controversy is convenience. Pluto turns out to be one of a very large class of objects orbiting the sun at a great distance. If we continue to call Pluto a planet, and we keep finding more of those objects, we'll end up with who knows how many planets, and it would be too hard to remember all their names. (Not that anybody would be obliged to do so.) That's the only issue at stake here.
A few days back I warned against confusing "is" and "ought" questions, a mistake which underlies many a ridiculous feud. Failure to recognize when an argument is purely semantic is probably an even more common cause of unnecessary strife. Unfortunately, it's also a cause of many substantive mistakes.
Take the "disease" called depression. Clinically, depression is defined by whether a clinician decides, based on a conversation with a person, that enough items from a list of qualitative judgments apply to that person -- fatigue, feelings of guilt, and so on. For purposes of research, depression is normally defined as the score on a specific questionnaire called the Hamilton Rating Scale. One way to score points on the Hamilton scale is to deny being depressed.
Is there a "real" entity behind these definitions? In one sense, that's tautological. The entity is just that: how you answer those questions. However, psychiatrists want to be seen as "scientific" in their judgments and treatments, and drug companies want their pills to be seen as scientific treatments. Therefore they must believe that depression is something more than how people answer a series of stupid questions, that it corresponds to some biological process that can be independently observed. This compulsion has resulted in a widespread myth, that depression is a deficiency in the neurotransmitter serotonin. That hypothesis has been convincingly refuted, but as often happens (see my recent post on "two kinds of people"), the falsification has failed to "take," so strong is the need for depression to be "really real."
There is a common facile response to Popperian falsificationism. (Popperian falsificationism, Popperian falsificationism, Popperian falsificationism. I just wanted to say that.) Supposedly, if I say, "All swans are white," and you go out and find a black swan, I'll just say, "You call that a swan?" But that is too facile, I'm afraid, and utterly wrong.
If I have defined a swan as a white bird, then indeed, there cannot be a black swan, and the proposition is trivially true. But that is not how biologists define species. It used to be. Taxonomists had only gross physical appearance to go on, so they could more or less arbitrarily decide that black panthers or white tigers were a different species. But now we define swans as birds which are interfertile with swans and are known to breed with other swans in nature. So, if a black one comes along who does that, not all swans are white, QED.
The point is, our thinking is constrained by how we sort things into categories. We can do that in a way which has a more or less compelling basis. These categories are constructs of the human mind, often strongly shaped by social pressures as well. That is what we mean by the social construction of reality. Reality is really real, but we humans need to put all the pieces into a finite number of buckets in order to make sense of it, and there are infinitely many ways of doing that. They are not, however, all equally defensible or useful.
The kinds of "fertile" research programs Imre Lakatos sees as definitive of science depend on fertile constructs. Whether we call Pluto a planet is trivial, it has no fundamental importance to astronomical research. But whether we call a score of 20 or more on the Hamilton Rating Scale the "disease" of depression is fundamental to how we address the problem of human happiness. That matters a lot.
Thursday, August 24, 2006
The Answer
In response to my last post, Missy wants to know what solution I recommend to the problem of so many uninsured people in the U.S.; and m, like far too many of us, is worried about a relative (her mother, specifically) who is running up large medical bills and doesn't know how she'll pay for them. Indeed, medical expenses account for about half of all bankruptcies in the U.S.
m is discouraged that Americans aren't willing to pay a bit more in taxes to cover the uninsured, but in fact, that wouldn't even be necessary. If we had a single payer system, we could spend less on health care than we do now, while covering everybody, and as a matter of fact, the cost to people who currently have insurance would be less than it is now. The proof that this is possible is not hard to find. Just walk over the bridge from Detroit to Windsor, Ontario. There it is! Universal, comprehensive, high quality health care that costs less per capita -- a lot less -- while covering everybody, and people who are healthier and live longer than we do.
I have explained how this is possible before, also some helpful background here. We squander 25% of our health care dollar on administration -- doctors figuring out how to bill multiple payers, insurance companies marketing to employers, health plans spending money to deny services to people, insurance company profits and executive salaries, etc. As a youth, I worked as a material handler on assembly lines at a razor factory. One of the lines made disposable razors. Some of them went in regular consumer packaging, others were packaged as medical supplies. Otherwise they were identical. The "medical supplies," however, sold for four times as much. (And no, they weren't sterile.) With a single payer system, all that waste would disappear. Poof! We could also have the equivalent of Britain's National Institute for Clinical Excellence, so that we could be sure of getting the best and most cost effective health care.
Whenever there's a serious attempt to achieve universal coverage in the U.S., we get the same scary rhetoric -- it's going to raise your taxes, it's socialism, the government will control everything and you won't have any choices, Cervantes is a commie pinko hippie who wants the terrorists to win.
If you call the premiums people will pay for health insurance under a single payer system "taxes," then it might mean higher taxes for some people, but the "tax" increase would be less than the amount they would save on health insurance. It would mean either socialized or perhaps closely regulated health insurance, but it wouldn't mean socialized health care. In Canada, doctors are private entrepreneurs, people have totally free choice of doctors, and that's all there is to it. And so what if it is "socialism"? We have socialized elementary and high school education in this country, socialized law enforcement and firefighting, socialized road building and maintenance, socialized parks, a socialized coast guard, socialized armed forces, socialized libraries, socialized retirement income, and socialized health insurance for elderly and disabled people. What's wrong with that? Why not have socialized health insurance for everybody, like those Communist slave societies in England, Sweden, Norway . . .
The reason people try to scare you about universal health care is because they wouldn't be able to fatten themselves any longer by slurping up that stream of waste. They're rich insurance executives who have millions to spend on lobbying and public relations, and they don't want their gravy train to stop. That's all. They're greedy pigs. And they're stealing from you.
Wednesday, August 23, 2006
An accident of history?
Still trying to work through the request pile, someone wanted commentary on the lack of universal health insurance coverage in the U.S. That lack is notable because all the other wealthy countries -- and even some unwealthy ones -- do have universal health care insurance. I've written a lot about insurance issues here, but I haven't really addressed the political history behind our peculiar non-system.
If the U.S. were not so abnormal, it wouldn't seem to require explanation in quite the same way. Among people under age 65 who are not poor children, poor and disabled, or poor adults taking care of children, the vast majority depend on employer-provided insurance. 62.4% of the non-elderly population had employer sponsored insurance in 2004. For people who don't get insurance through their employers, it's quite expensive to buy, and, since they are usually in low wage jobs, they generally can't afford it. That's why we have something like 45 million uninsured people. As the price of health insurance keeps going up, employers are covering fewer workers and/or forcing employees to pay a higher share of the premiums. So, the number of uninsured people has been rising.
In the 1930s, when FDR launched his "New Deal" programs and established Social Security, there was debate about establishing a universal health care system as part of it. Probably the most important reason it didn't happen was because the American Medical Association, then one of the most powerful lobbies in Washington, was violently opposed to it. Remember that it was just around this time that medical care was actually starting to be worth having, so private insurance plans started to emerge.
During WWII, the government imposed a freeze on wages, but did not freeze benefits. Since there was a labor shortage, in order to compete for workers, employers started to offer health insurance. Harry Truman tried to establish a universal health insurance system, but by this time, the insurance industry had a stake in the status quo. So did unions, who could claim health insurance benefits as a victory they had helped deliver, and even employers, who saw it as a way of promoting worker loyalty. And, the AMA was still against universal health care. Since a substantial percentage of the population did have insurance, there was not strong pressure to fix the system. Remember -- and this is very important -- that health insurance was much cheaper then than it is now.
In 1954, the IRS decided that the premiums employers paid for workers' health insurance were not taxable income for workers. This further encouraged the growth of employer-provided insurance. Although universal health insurance seemed dead, there was an ongoing debate about providing health insurance to Social Security recipients -- the elderly and disabled. Continuing vociferous AMA opposition prevented the idea from going anywhere.
In 1960, JFK ran on a platform of providing "Medicare" to the elderly. Finally, after he was murdered, sympathy for the late president's goals, along with the legislative savvy of his successor Lyndon Johnson and Democratic ascendancy in the Congress made passage of Medicare possible in 1965. Medicaid, for welfare recipients, also passed as a sort of afterthought. At the time, it covered a much smaller population and, again, was far less expensive, than it ultimately came to be. Medicare succeeded even though the AMA hired Ronald Reagan to make a record entitled "Ronald Reagan speaks out against socialized medicine," which was distributed to thousands of doctors' wives for use in house parties to raise opposition to Medicare. Reagan's speech concluded, "And if you don't do this [i.e., write and call your member of Congress and organize against Medicare] and if I don't do it, one of these days you and I are going to spend our sunset years telling our children, and our children's children, what it once was like in America when men were free."
Well, we've had Medicare for 40 years now, and most people don't feel enslaved by it. This signal defeat for the AMA probably helped to bring about a decline of the organization's status within the profession, which no longer generally shares the aversion to government-sponsored health care. However, the vast power of the insurance industry lobby has continued to stymie any further extension of government sponsored health insurance in the United States. The Bush administration, of course, ultimately wants to do away with Medicaid and probably Medicare as well. There is little prospect of that happening, but they are squeezing both programs. Meanwhile, the ranks of the uninsured keep growing.
Tuesday, August 22, 2006
A difficult question
By request, here is my take on the ethics of HIV vaccine trials.
The prospect of a vaccine that would truly prevent HIV infection is compelling. It's worth a very large investment and a certain amount of risk. So far, although as I said a couple of days ago we've managed to slow down the spread of HIV, stopping it is nowhere in sight. It's still incurable, extracts an appalling human toll, and is in fact destroying whole societies in Africa. A vaccine eliminated another terrible scourge of humanity, smallpox, and we are tantalizingly close to eliminating polio (although the conflict in Afghanistan has produced a serious recent setback). Other dread diseases are no longer major problems in the wealthy countries thanks to vaccines, although they continue to take an unconscionable toll in most of the world.
In order to develop a vaccine (or any medical product), you first have to test it in people in what's called a Phase I trial, with a small number of volunteers, to determine that it appears to be safe, at least for most people, in the short term. You may get some useful information about biological response at this stage, but that's not the main objective. In somewhat larger Phase II trials, you begin to get information about dose response and preliminary evidence of effectiveness, while continuing to look for adverse effects. Finally, to get approval, you need to do a large-scale Phase III trial with adequate statistical power to prove that the product is effective for the intended purpose in a defined population, with somewhat more potential to detect adverse effects.
As I have noted many times, however, Phase III trials can easily miss important adverse effects because a) they don't generally have long-term follow-up and b) they aren't necessarily looking for the effects that do occur and may be underpowered to detect less common ones. Also, while the trial population is narrowly chosen, the product is usually prescribed much more widely, so you often get bad news only after widespread commercial use has gone on for a while.
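A back-of-the-envelope way to see the problem with rare harms: if an adverse effect hits 1 participant in 10,000, a trial of a few thousand people will quite often see no cases at all. The numbers below are illustrative, not from any particular trial.

```python
# Probability that a trial with n participants observes zero cases of an
# adverse effect occurring with probability p per participant.
# Illustrative numbers only.

def prob_zero_cases(p, n):
    return (1 - p) ** n

for n in (300, 3_000, 30_000):
    print(n, round(prob_zero_cases(1 / 10_000, n), 3))
```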
Certain conditions are considered essential to the ethical conduct of trials. The first is that participants be fully informed of the possible risks and harms from the procedure, of the selection criteria, and other conditions of the study; and that their participation be entirely uncoerced and freely chosen. Since there is no basis for expecting that participants in early stage trials will benefit, their participation is presumed to be largely altruistic, although in fact they are paid. This is supposed to compensate them for their time and inconvenience, but of course it is really an incentive. For later stages, there is supposed to be what is called clinical equipoise. That means we don't know whether the treatment is better than the control, it's about 50/50. Otherwise, it would be unethical to give it to some people and not to others. (In the real world, this is often a fiction, but that's another story.)
In the case of HIV vaccine trials, there are additional complications. One is that the test to determine that someone is HIV infected is actually an antibody test. Being "HIV seropositive" means that you have antibodies to HIV in your blood, indicating that you have been exposed to the virus. Since the immune system generally does not succeed in clearing HIV, people who have been exposed are presumed to be infected. (This is not the case with most viruses. Having antibodies to measles means you are immune, not that you are infected.)
But vaccines are intended to provoke an immune response, therefore if you receive most experimental vaccines, you will forever more test positive for HIV, whether or not you become infected. This is obviously a disadvantage, because a) it will be more difficult, and expensive, to find out if you are infected in the future -- you'll need a test for viral RNA; and b) being HIV seropositive can be a disadvantage if you want to emigrate, buy insurance, etc. You may have a lot of explaining to do.
A second complication is that people whose behavior puts them at risk for HIV are sometimes prone to rationalizing why it's really okay. Those are exactly the people you need for a Phase III or even a phase II trial, in which you are trying to prove that the vaccine actually works. In fact, you need for some of them to become infected during the course of the trial -- you need a statistically significant difference in the rate of infection between the group that gets the vaccine and the group that doesn't. So if you pick people at low risk, your trial won't succeed. But if people at high risk think they may be protected, they may be more likely to engage in unsafe behavior, which would make your trial unethical because it is harming the participants.
Ergo, the only way to do this ethically is to provide the trial participants with intensive counseling and other needed supports, such as addiction treatment, to reduce as much as possible the chance that participating in the trial will lead them to behave less safely. This applies to both the control group and the active group because, remember, at this stage we don't actually know that the vaccine is effective. But if our counseling is really effective -- and intensive counseling has been shown to be effective at reducing the risk of infection -- the trial will fail to show that the vaccine works.
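If you want to see the bind in numbers, here's a rough sample-size sketch for comparing infection rates between the two arms, using the usual normal approximation. The incidence and efficacy figures are invented for illustration; the point is only that when counseling drives down the background infection rate, the number of participants you need balloons.

```python
# Rough per-arm sample size for detecting a difference in infection rates
# between a vaccine arm and a control arm (normal approximation,
# two-sided alpha = 0.05, power = 0.80). Toy numbers only.
import math

Z_ALPHA, Z_BETA = 1.96, 0.84   # standard normal quantiles

def participants_per_arm(p_control, vaccine_efficacy):
    p_vaccine = p_control * (1 - vaccine_efficacy)
    p_bar = (p_control + p_vaccine) / 2
    numerator = (Z_ALPHA + Z_BETA) ** 2 * 2 * p_bar * (1 - p_bar)
    return math.ceil(numerator / (p_control - p_vaccine) ** 2)

# Infection rate in the control arm: without vs. with intensive counseling.
for p_control in (0.04, 0.01):
    print(p_control, participants_per_arm(p_control, vaccine_efficacy=0.5))
```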
While you ponder that dilemma, you should know that currently, while there are numerous HIV vaccine trials going on, these are all early stage trials. You can read about them here, from the U.S. government sponsored organization promoting these trials. None of the products currently in development is thought to have strong potential for ultimate approval -- these are really all experiments to try to learn more about how the immune system reacts to certain stimuli, to gain information that may help in the design of more effective products. (HVTN 502, the most advanced, is a Phase II study.)
And that brings us to the final problem with HIV vaccines, which is that in order to gain approval, they have to be highly effective. They can't just succeed in preventing infection in a percentage of cases, because one of the worst things you could do is give people a false sense of security. Partially effective vaccines may be useful with other pathogens, where people's individual behavior has little relevance to their risk of becoming infected, but with HIV they could not gain approval.
It has proven very difficult to design even a partially effective HIV vaccine for a couple of reasons. One is that the virus mutates quickly, and exists in various strains. A vaccine that protects against one strain may not protect against another, and a vaccine that is effective today may not be effective tomorrow. (This is the same problem we have with flu vaccines, but for the reasons just stated they are still useful, whereas a comparable HIV vaccine would not be.)
A second reason is that in HIV, the surface proteins which normally provoke an immune response against viruses are hidden by a sugar coating. One goal of the current trials is to find a strategy to defeat this defense. But even if they succeed, that isn't going to help with the first problem.
The people working on HIV vaccines believe passionately in what they are doing and they are convinced they will succeed. I'm not an expert on the technicalities, so I won't venture a guess as to the true prospects. But I will say that I am more concerned about the ethical problems with all this than most people seem to be. Indeed, I can't find a good discussion to link to, which is why I gave you my own at such length. If anyone has a resource to recommend, please do.
Monday, August 21, 2006
Give me a Break
Being as I was semi-comatose at the end of a long, hard weekend I tuned in my local TV news last night -- specifically the NBC affiliate in Boston. I'm sure you've already guessed the lead story, which took up at least 60% of the non-weather non-sports part of the broadcast. These insane clowns had actually sent a reporter to Boulder, where he proposes to remain for the duration. Of course, he had to be in Boulder to report on events which took place in Thailand, and on an airplane above the Pacific.
Like all "news" production operations, they have branded the story: "Justice for Jon Benet," in this case. Their advertising blitz for the upcoming epoch of all Jon Benet, all the time, features intrepid reporter Dan Hausle, with a bulldog expression, striding resolutely forward in slow motion and then folding his arms in the "Mister Clean" pose, to prove he's stronger than dirt -- which is a logical impossibility because you can't be stronger than yourself. Intrepid investigative reporter Hausle's story consisted of an entire inter-commercial segment dedicated to the scandal of what John Mark Karr had consumed on the airplane: champagne, chardonnay, pate, fried prawns. They featured interviews and quotes from various people who were shocked and appalled that an accused criminal would be eating prawns on an airplane. After the break, they went to a conversation among Hausle and the studio anchors swapping disgust and horror over the depravity of the FBI's failure to begin exacting vengeance prior to the indictment.
Well, here are some actual facts which it might truly be in the public interest to know. Perhaps a responsible TV news producer -- oh sorry, that's oxymoronic -- would want to take this opportunity to tell us something meaningful. Murder of a child five or under is not extremely common in the United States -- there are about 600 per year. The rate went up from 1980 till the mid-90s, and then started to come down a bit, but these trends are weak.
Victims are disproportionately Black -- around 37%, more than twice the proportion of the population which is Black -- but crime statistics, unfortunately, don't give us any further ethnic breakdown. (We need to fix that.) A parent is the perpetrator in just over half of all cases -- mothers and fathers about equally -- and a friend or acquaintance, usually male, in 25%. Murders of young children by strangers are, in fact, rare.
And, of course, most victims are not beauty pageant contestants who provide an excuse for endlessly showing video which is disturbingly close to kiddie porn on prime time TV. And most victims are not from highly affluent families who glow in the dark, although rich people aren't immune from killing their kids. Child murder is largely a sub-category of family violence. Families are the most violent institution in society, in fact. So much for family values.
Oh yeah -- I don't think Karr did it. So this whole thing is going to look really, really silly. As if it doesn't already.
Exploiting Exceptionalism
No dramatic news came out of the recently concluded International AIDS Society meeting in Toronto, but the undramatic good news is that the situation does not appear as grim as it did a few years ago. The number of HIV-infected people worldwide continues to grow, but not at the explosive rate we once feared it might. Prevention efforts in the wealthy nations have kept prevalence rates fairly steady, and in most of the poorer nations, they have begun to have a noticeable impact. The insistence of the United States government that a large share of its contribution to the global campaign against HIV be devoted to abstinence only programs means that local public health workers cannot use the money as effectively as they could, and the U.S. is still far short of fully funding its commitments. But at least we're doing something.
In the current NEJM, Jim Yong Kim and Paul Farmer review the current state of the global epidemic. They particularly emphasize antiretroviral treatment, which has now become available to far more people in Africa and other poor countries thanks to inexpensive generic versions of the drugs manufactured in China, India and Brazil. They call for expanding access to all HIV related drugs to everyone who needs them.
This emphasis may at first seem to miss the mark. These drugs are not a cure, they often have serious side effects, and while they extend life expectancy by an average of 13 years compared with no treatment, people living with HIV can still expect to get sick and die of effects of the disease eventually. Some people even fear that the availability of treatment may make some people think that acquiring HIV infection is not such a terrible thing after all and so undermine prevention efforts. Furthermore, why should we extend HIV care to everyone while not providing all of the other basic needs? I have written before about the Millennium Development Goals, which are not being met: the 11 million children under five who die avoidably every year, not from HIV but from contaminated water, malnutrition, malaria and other diseases; the half million women who die every year in childbirth; and many other terrible problems.
But the willingness of the wealthy countries to invest in HIV care, although it seems an unjustifiable exception, represents an opportunity as far as Kim and Farmer are concerned. Succeeding with antiretroviral treatment requires building or rebuilding adequate health care systems in the poor countries, stopping the brain drain of physicians and other health professionals, and meeting basic needs for transportation, nutrition, etc. If we do those things, we will have the basis for fighting all of the fundamental threats to life and well being in the poor countries. So why not ride HIV exceptionalism as far as it will take us, toward meeting all of the essential goals?
But Kim and Farmer also point to a grim development, the spread of highly drug resistant pathogens -- HIV, TB and malaria in particular -- which are feeding on the HIV epidemic but threaten all humanity. In the same issue of the journal there is an article on the growing prevalence of antibiotic resistant staph infections in the United States. And we also have growing problems with C. difficile and other nasty drug resistant bacteria.
This is happening, of course, because of that non-existent, satanically inspired mythical phenomenon of evolution. Even as we build up our capacity to battle our ancient enemies throughout the world, they are developing their own capacity to defeat our weapons. It is impossible to overstate the urgency of this matter. We'll continue to follow it here.
BTW: If you haven't already noticed, you need to hit the "refresh" button on your browser or you're likely to be seeing a two- or three-day-old version of this (or any) blog. I update faithfully every day except Saturday.
Friday, August 18, 2006
There are two kinds of people in the world . . .
Those who divide the world into two kinds of people, and those who don't. If you wish to understand the scientific worldview, you have to divide beliefs into scientific and non-scientific.
What exactly do scientists believe in?
First, let's get something very clear. There are at least two broad domains of belief, which we can call is and ought. Tremendous miscommunication, waste of oxygen, and rancor results from people getting them mixed up. Sometimes Ozzie is talking about is while Harriet is talking about ought. If they had only noticed it in time, they wouldn't be divorced. Sometimes a person notices a fact about nature -- be it human biology or the behavior of waterfowl -- and concludes that it proves how people ought to behave, or how society ought to be organized. Sometimes people reason from what they believe ought to be, to a conclusion about what is. All of these are common, and grave errors.
Got that? Because when I say that scientists don't believe in anything, that does not imply that they have no ethics, find life pointless or meaningless, or can't be passionate advocates for causes, including truth, justice and the Albanian way. I'm talking about is, not ought. (Also, the statement isn't actually true, although perhaps it ought to be. As we shall see momentarily, scientists cling fiercely to ideas. But then again, maybe that's a good thing.)
So, with the exposition out of the way, this plot concerns how we distinguish between science and pseudoscience or unscientific beliefs. This is called, repulsively, the "demarcation problem." I have mentioned previously the ideas of intersubjectivity -- the notion that the truth is "out there," as it were, subject to observation, and that we accept observations when we share them. Cold fusion went down in flames because nobody but Pons and Fleischmann -- okay, almost nobody -- could see the same thing, not because their explanation for their observations was necessarily implausible.
But that is just the beginning of the story. New observations extend our knowledge -- yup, there's an icy body we hadn't noticed before, orbiting beyond Pluto -- but by themselves, they don't extend understanding. That takes a theory. A theory is a general model of the relationships among constituents of nature, and to be satisfying, it usually has to include one or more causal statements. (Yeah, heavenly bodies go around each other, but why?)
Newton thought that he had proved his theories from facts but of course, he was wrong, as Einstein later showed. His theory was only approximate (whatever that means), not true. And as people have thought about this since Newton's day they have realized that it is impossible to prove any theory, at least one that is sufficiently interesting. (Again, whatever that means.) There could always be another explanation. Lots of very smart people are working very hard right now to try to find a deeper explanation for gravity than Einstein's.
You may have heard about Karl Popper's idea that a theory is "scientific" if its proponents specify, in advance, observations which would prove it false. (Popper was an Austrian-born philosopher who became a British subject. He died in 1994.) Just because a theory is falsifiable doesn't mean it is true, and obviously, specifying falsifiability criteria means we think it might not be. But, once we prove our theory wrong, we can go on to find a better one -- one that accounts for all the observations the old theory explained, plus the ones it cannot. And when that theory is falsified, we go out and get an even better one. And so on.
It is now considered the mark of a rube to think this "falsificationist" doctrine is correct. Supposedly it has been falsified. Actually, I think the critics are partly just quibbling and that their objections are not nearly as fundamental or profound as they seem to think. So I guess I'm a rube. But anyway, the arguments, which can get awfully technical, basically come down to the idea that if an observation seems to falsify a theory, it is often possible to explain it away. The measurement might have been inaccurate. The sub-theory you used to argue that the observation would falsify the main theory is actually the one that's wrong. (E.g., it's not my date for Etruscan civilization that's wrong, it's the method of Carbon-14 dating.) My theory is right, it just needs a little something extra to account for the observation. (Ptolemaic astronomers kept adding "epicycles" to explain retrograde planetary motions.)
Scientists do indeed have a very hard time giving up their precious theories. As Thomas Kuhn famously observed in The Structure of Scientific Revolutions, observations that might be considered to falsify a widely accepted theory can pile up for a long time, and they just serve as puzzles to be solved by explaining them within the confines of the theory. Then quite dramatically, a new scientific consensus emerges: the Copernican Revolution; the immutability of the elements and the periodic table; etc. Kuhn makes this seem like a purely sociological event, not really explained by the world "out there," but by the social hierarchies and conventions of the scientific enterprise.
I say, fiddlesticks. It may have taken longer than it should, but no sane person can deny that the Ptolemaic universe has been falsified. So have the phlogiston theory of combustion, the miasma theory of disease, and, oh yeah, the theory that the world was created by divine fiat 10,000 (or less) years ago. These theories are false, and they have been proved false. There is no conceivable combination of arguments about faulty measurements, faulty experimental assumptions, or missing epicycles that can rescue them. We have sent robots to Mars. We can see microbes under the microscope. We can count the layers in the Antarctic ice. Case closed. The argument that single observations don't falsify theories is also largely a quibble. If you have to add an epicycle, you've been forced to change your theory. It's just a matter of degree.
However, just because theories can indeed be falsified doesn't mean that they can be proved. (Some philosophers claim that this is illogical because by upholding falsification, I am claiming that the theory that the universe does not revolve around the earth is provable. It's true, I consider it proved, but I don't consider it a "theory" in the technical sense we are using here, since it doesn't explain anything, it just states something we know.)
But if we can't prove anything, and all theories are likely false, what makes a theory scientific? I am satisfied by the formulation of the mathematician and philosopher Imre Lakatos, whose very accessible discussion I excerpt thus:
[A]ll the research programmes I admire have one characteristic in common. They all predict novel facts, facts which had been either undreamt of, or have indeed been contradicted by previous or rival programmes. . . . Halley, working in Newton's programme, calculated on the basis of observing a brief stretch of a comet's path that it would return in seventy-two years' time; he calculated to the minute when it would be seen again at a well-defined point of the sky. This was incredible. But seventy-two years later, [when both Newton and Halley were long dead,] Halley's comet returned exactly as Halley predicted. . . . Thus, in a progressive research programme, theory leads to the discovery of hitherto unknown novel facts. . . .
The hallmark of empirical progress is not trivial verifications: Popper is right that there are millions of them. It is no success for Newtonian theory that stones, when dropped, fall towards the earth, no matter how often this is repeated. But, so-called 'refutations' are not the hallmark of empirical failure, as Popper has preached, since all programmes grow in a permanent ocean of anomalies. What really counts are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.
So, what of the theory that God made the world and all the species in it? This predicts nothing. Its proponents claim that no-one can know the mind of God, so God might do anything at all, without notice, rhyme or reason. Its proponents spend all their time trying to explain away observations, in ever more desperate flailings to somehow cram observations of reality into their procrustean program of dead and degenerating ideas. Their sought-after destination is human helplessness and impotence, and ignorance is inscribed on their banner.
Miracle Cure
Don't get me wrong. I get down on the drug companies (that's drug in black and white) and many of their products here a lot, but I don't mind taking a few. As a matter of fact, I'm alive to yell and scream at Merck and Pfizer because of antibiotics that they manufacture. Long-time readers know that I underwent major surgery about 15 years ago, of a kind that could not be done successfully without antibiotics. While the doctors made a mistake, and did a far more radical operation than I actually needed, I did have a condition that would have been 100% guaranteed to kill me before the antibiotic era.
If you've noticed that I've been off my game a bit the last couple of days, it's because I've been under the weather. Without going into unnecessary detail, right now I'm taking a late generation antibiotic, ciprofloxacin, and I'm very glad for it -- I should be back to full strength very quickly.
Most people assume that medicine became "scientific" and just started to get better and better after the Enlightenment, along with physics and chemistry and biology. But it isn't so. As a matter of fact, doctors didn't really begin to do more good than harm until the development of antibiotics, which didn't seriously happen until World War II. The improvements in life expectancy and health status that happened between the late 19th Century and the mid 20th resulted from public health measures -- which mostly means sanitary sewers and provision of clean drinking water, along with pasteurization, refrigeration and generally cleaner food handling -- plus improved nutritional status and better housing as the worst conditions of the Industrial Revolution began to recede.
Doctors could cut off mangled limbs and seal the stump with a red-hot iron; and they could administer heavy metal toxins to syphilitics, which were bad for you but even worse for the spirochete so no doubt worth it on balance. They could give opiates for pain relief, and they gave smallpox vaccinations, but you didn't actually need a doctor for that. They could do various sorts of surgery to remove tumors and abscesses but it was as likely to kill you as cure you. Other than that, what they did was generally useless or harmful. As late as the 1970s, it was perfectly respectable to argue, as Ivan Illich did in Medical Nemesis, that medicine was, on balance, harmful to the health and well-being of individuals and to the beneficial order of society.
Doctors still manage to kill 100,000 people or so every year in hospitals, which are still very dangerous places full of virulent, drug resistant pathogens among other horrors. No doubt doctors kill as many more outside of hospitals, due to adverse effects of prescription drugs. However, a bit more than 30 years after Medical Nemesis, it is no longer defensible to say that medicine does not make a substantial contribution to population health and longevity. The movement for evidence-based medicine (sad that it required a movement, but it happened), continuing advances in understanding disease etiology and the development of effective prophylaxis and treatment for heart disease, and better and better surgical techniques, have definitely tipped the balance.
Before WW II, lack of access to medical services no doubt made poor people feel disadvantaged, but they may actually have been lucky. That's not true any more. We should definitely still do our best to eat right and exercise and not smoke and all that good stuff, and try to stay out of the clutches of doctors. They have a bias toward intervention, they can still do more harm than good in many situations, and we should probably strive to be more conservative than our doctors when it comes to treatment decisions. But we are better off with them than without them. (Or, as Tom Lehrer suggested, it's no fun to be a Christian Scientist with appendicitis.)
And that is why it is now absolutely the case that universal access to comprehensive health care has become an urgent matter of justice and a basic right on a planet that can afford to provide it. That wasn't true until recently; effective medical intervention was a hit or miss affair, and it made sense to say that feeding people and giving them clean water to drink and other basic conditions of life was so much more urgent that sending doctors was little more than a distraction and a way of denying the unconscionable reality of injustice.
In extreme situations, such as wars and mass famines, there still is a lot to be said for that position and medical relief agencies such as Doctors without Borders struggle with this dilemma. However, as a general proposition, basic, effective medical services have now taken their place alongside other basic needs as essential components of a just world order. I have been asked to say more about universal access and the situation of the uninsured in the U.S. -- topics I've certainly addressed in the past but need to get back to, as the political landscape is changing.
So I will. That's two thumbsuckers in a row, and as usual, you won't hear from me again until Sunday, when I owe y'all something on the origin of life. Then I'll get after it.
Thursday, August 17, 2006
Deconsdrugtion
I added a new link to the sidebar, under "resources." The Harm Reduction Coalition is a good place to start if you are concerned about addictive or drug abusing behavior in yourself or someone you know.
Oh yeah, there are drugs, and there are Drugs.
drug: A chemical entity taken into or applied to the body with the approval of a physician for the purpose of treating or preventing disease. (Most can be purchased only with a legal license provided by a licensed physician, called a "prescription."); or, such an entity approved by the U.S. Food and Drug Administration to be marketed for the prevention or treatment of disease, but sold directly to the public.
Drug: A chemical entity taken into the body although it is illegal to do so. For most of the history of drugs, the user's motivation was presumed to be a desire for alteration of conscious experience produced by the drug, or avoidance of experiences that occurred in its absence. Such desired experiences could include euphoria, analgesia, disinhibition, hyperarousal, sedation, psychological or spiritual insight, hallucinations, dreams, etc. Avoided experiences included physically distressing symptoms, agitation, craving, etc. However, the definition has recently been expanded to include chemicals intended to increase muscle mass, strength and endurance for purposes of athletic competition or body building. Also, the definition has been expanded to include over-the-counter drugs which are used for purposes of altering conscious experience, when such drugs are approved for other purposes, e.g. cough syrup; and legal entities used for this purpose but not marketed for it, e.g., whipped cream propellant.

Note that the same chemical entity can be either a drug or a Drug, depending on the circumstances. A prescription medication taken without a prescription, or with a prescription obtained illegally, is a Drug. Note also that entities which it is not illegal to consume, but which are not prescribed by physicians or legally marketed for the prevention or treatment of disease, but which are marketed for the purpose of altering consciousness, cannot be drugs, or Drugs. Hence alcohol, coffee and tobacco, for example, are not drugs, although they are used for purposes for which Drugs are typically used.
There is a complicated cultural and historical dance that determines what entities become drugs, what entities become Drugs, and what entities are neither; and how the legal regime responds to these categories. Partners in this dance include financial and political interests, conventional morality, challenges to conventional morality, racism and racial/ethnic politics, cultural institutions, local custom, international relations, historic accident, and a little bit of science and reason. As we continue the discussion of drugs, which has long been a main focus here, we will also discuss Drugs.
Wednesday, August 16, 2006
Why do you think they call it dope?
In the land of the bowler and bumbershoot, it is possible to have discussions that we can't have here. For example, a Parliamentary Committee has just published an inquiry on the classification of illegal drugs. If you have a high-speed Internet connection, a fast printer, and a really, really serious interest in this issue you can download all 179 pages here.
If you are less obsessive, you can take my word for it. Here is the UK's present system of classification.
(In the U.S., penalties for most drug offenses are set by state law, so we don't exactly have a comparable system.) The committee notes that although this classification purports to be based on an overall assessment of the "harmfulness" of these drugs, in fact there are no consistent criteria or evidence base behind the classification system. They particularly note that there is no evidentiary basis for putting hallucinogenic mushrooms in Class A. They further note that it makes no sense for alcohol and tobacco to be omitted from the list, since they are in fact demonstrably more harmful than some substances which are on the list.
However, they are not calling for prohibition of alcohol and tobacco. On the contrary, they note that there is no particular evidence that the criminal penalties have a deterrent effect on the use of these drugs. They recommend uncoupling criminal penalties from the classification on the basis of harm. Although it is not the province of this report, a clear implication is that the entire question of criminal penalties for drug possession should be revisited.
Here is a key excerpt:
Decoupling penalties and the harm ranking would permit a more sophisticated and scientific approach to assessing harm, and the development of a scale which could be highly responsive to changes in the evidence base. It is beyond the scope of this inquiry to recommend an alternative approach to determining penalties but we note that possibilities could involve a greater emphasis on the link between misuse of the drug and criminal activity or make a clearer distinction between possession and supply. It should also be noted that while it is certainly possible—and desirable—to take a more evidence based approach to ranking drugs according to harm associated with their misuse . . . caution needs to be exercised in viewing the scale as ‘scientific’ when the evidence base available is so limited and, therefore, a significant part of the ranking comes down to judgement calls.
The caveats about the limitations of the evidence base notwithstanding, a more scientifically based scale of harm than the current system would undoubtedly be a valuable tool to inform policy making and education. Charles Clarke, the then Home Secretary, pointed out that: “One of the biggest criticisms of the current classification system is that it does not illuminate debate and understanding among the young people who are affected by it”.
Lesley King-Lewis, Chief Executive of Action on Addiction, also called for “a much more rational debate” which would inform “young people in particular, of the different levels of drugs and the different and varying harms that they can do to themselves”. Sir Michael Rawlins, ACMD Chair, agreed, saying: “Where I think we are all at fault, not just the ACMD but all of us are at fault, is not being better at explaining to young people particularly the dangers of drugs”.
Professor Nutt, Chair of the ACMD Technical Committee, argued that a more scientifically based scale of harm would be of value in this situation: “in education the message has to be evidence based. If it is not evidence based, the people you are talking to say it is rubbish”. The Runciman report also noted that “The evidence that we have collected on public attitudes shows that the public sees the health-related dangers of drugs as much more of a deterrent to use than their illegality”, emphasising the importance of conveying health risks and harms as clearly and accurately as possible. It is vital that the Government’s approach to drugs education is evidence based. A more scientifically based scale of harm would have greater credibility than the current system where the placing of drugs in particular categories is ultimately a political decision.
Amen to that, and it all goes triple for the U.S., where our drug control policy is absolutely insane. The point of this post is just to introduce the idea that it is actually possible to have a rational discussion of this subject. Whether that will ever happen in the land of the puritan and the home of the moral crusader remains to be seen.
Truth . . .
Surprisingly hard to define. I'm going to give those who are interested in exploring the question "What is Truth?" (which is, I suppose, the Last Common Ancestor of all questions) some links. These are not for the faint-hearted!
A review of how philosophers (i.e., highly trained, professional bullshit artists, and I mean that in a good way) have addressed the question of truth -- but how are we to assess the truth value of statements about truth? From The Internet Encyclopedia of Philosophy. Will probably make your head hurt and may not leave you feeling enlightened, but at least it shows that this ain't easy.
An essay on Critical Thinking (pdf) by Peter Facione. Critical thinking is the father of knowledge.
The Skeptic's Dictionary. Yup, "Creationism" is in there.
Back soon with some more mundane matters.
Addendum: Best book review of the calendar quarter.
Tuesday, August 15, 2006
New Rule
No using other people's handles or identities, even if you aren't actually trying to fool anyone and it's just to make a point. I think it's confusing, has the potential for mischief, and detracts from sensible discussion.
It's fine to use a pseudonym, obviously, but please be consistent and use the same one all the time, and don't use anyone else's, including mine. Take responsibility for what you write.
Future violators will be banned. That is all.
And when I die, and when I'm gone . . .
According to the Sutras, Siddhartha Gautama said that it was pointless to speculate about first causes, by which he meant such questions as why the world exists. He also said that people believe in various gods and concepts of god but he would not get involved in those debates.
As a philosopher, I certainly don't put myself in the same league as Buddha, but I do try to follow his example here. It seems that people imagine I have written all sorts of things that I have not. I have not, for example, specifically addressed the existence of God. I have said that the cosmos is more than 13 billion years old, and unimaginably vast; that the earth is more than 4 1/2 billion years old, and life on earth more than 3 1/2 billion years old; and that the species which are here today arose by evolution, descending from a common ancestor, a single cell. I don't think I have actually said it, but I probably have implied, at least, that it is not necessary to invoke "intelligent design" or any supernatural force to explain the development of complex forms from simpler ones by evolution. Finally, I have said that if you want to invoke God as the explanation for something, you need to define what you mean by God. That doesn't seem too much to ask.
For some reason, there are people who think that all of this means that my life must be without meaning, that I am incapable of love or experiencing love, that my existence is empty and pointless, that I am without ethics, that I have no spiritual life. Let me put you all at ease. That is assuredly not the case, any of it. Somebody also called me an idiot, which I am pretty sure is not accurate. (And seems a rather un-Christian thing to say, no?)
The reason I insist on these conclusions is simply because, if we are to make good decisions about health and health care, whether in general, or for ourselves and our loved ones, we need to have accurate information. That includes accurate information about human biology, and about the biosphere of which we are a part. Evolution is extremely important because it is the explanation for our biology and for the biosphere, and because it is going on right now, all around us.
One example of why this is important, the one that got me started on the whole thing, is that the pathogens that attack us, which have plagued human existence from time immemorial, are evolving, and they do it very fast. They are threatening to escape from the drugs we use to control them, which will be very bad news for us. To understand how to prevent this, you have to understand evolution. But that's not the only reason why evolution is important, and I'm planning to go on to other very important subjects, from diabetes to cancer.
So I discuss these matters because I believe in the truth. I believe in what's real. Unfortunately, according to poll results reported in Science magazine and noted today in the New York Times, the percentage of people who agree with the statement "Human beings, as we know them, developed from earlier species of animals," is lower in the United States than in any European country except Turkey.
Yup, Americans disagree with the Christians, but agree with the Muslims. 85% of Finns agree, but only about half of Americans. That is profoundly sad. In fact, it is shameful.
In the same issue of the Times, Lawrence M. Krauss writes:
But perhaps more worrisome than a political movement against science is plain old ignorance. The people determining the curriculum of our children in many states remain scientifically illiterate. And Kansas is a good case in point.
The chairman of the school board, Dr. Steve Abrams, a veterinarian, is not merely a strict creationist. He has openly stated that he believes that God created the universe 6,500 years ago, although he was quoted in The New York Times this month as saying that his personal faith “doesn’t have anything to do with science.” “I can separate them,” he continued, adding, “My personal views of Scripture have no room in the science classroom.”
A key concern should not be whether Dr. Abrams’s religious views have a place in the classroom, but rather how someone whose religious views require a denial of essentially all modern scientific knowledge can be chairman of a state school board.
I have recently been criticized by some for strenuously objecting in print to what I believe are scientifically inappropriate attempts by some scientists to discredit the religious faith of others. However, the age of the earth, and the universe, is no more a matter of religious faith than is the question of whether or not the earth is flat.
It is a matter of overwhelming scientific evidence. To maintain a belief in a 6,000-year-old earth requires a denial of essentially all the results of modern physics, chemistry, astronomy, biology and geology. It is to imply that airplanes and automobiles work by divine magic, rather than by empirically testable laws.
Dr. Abrams has no choice but to separate his views from what is taught in science classes, because what he says he believes is inconsistent with the most fundamental facts the Kansas schools teach children.
Similarly, a recent poll finds that 50% of Americans believe Saddam Hussein possessed so-called "Weapons of Mass Destruction" at the time the U.S. invaded. Let me tell you right now, in case you have any doubt, that is categorically false, and the CIA, the U.S. Army, and even Dick Cheney agree with me. People may disagree about whether or not it was a good idea for the U.S. to invade Iraq, but they have to base their opinions on the facts. The same goes for health and medicine.
So, as for purpose and meaning, and all that. The Bible says, "You are dust, and you will return to dust." That is indeed our fate. But I intend to make a difference, and leave something behind. People who spend all their time worrying about a world beyond, instead of this one, will leave nothing.
. . . There'll be one child born, and a world, to carry on, to carry on.
Readin' and 'Ritin' and 'Rithmetic . . .
Bramwell, West and Salmon in the BMJ revisit our old friend Bayes Theorem, but in the interest of not bruising the tender brains of their physician readers, they never refer to the cipherin' preacher by name, nor do they write out any formulas. They present it only as a word problem. Guess what? Half the obstetricians in the UK don't have a clue. And, not surprisingly, their slice of the general population -- expectant couples -- almost never have a clue.
They gave all of the above -- plus midwives, who were 100% out to lunch -- the following puzzle:
The serum test screens pregnant women for babies with Down's syndrome. The test is a very good one, but not perfect. Roughly 1% of babies have Down's syndrome. If the baby has Down's syndrome, there is a 90% chance that the result will be positive. If the baby is unaffected, there is still a 1% chance that the result will be positive. A pregnant woman has been tested and the result is positive. What is the chance that her baby actually has Down's syndrome? -...........%
As I explained in the post linked above, the 90% chance that the result will be positive if a baby has Down's syndrome is called the Sensitivity of the test. The 99% chance that an unaffected baby will test negative (which these authors state as its complement, the 1% false positive rate) is called the Specificity. The 1% of all babies who have Down's syndrome is called the prior probability. The probability that a fetus with a positive test actually has Down's syndrome is called the Predictive Value Positive (PVP) of the test.
Think about it: What is the PVP of this test, in other words, if the fetus tests positive, what is the chance it has Down's syndrome?
[Smooth jazz playing in the background (This is to signify the passage of time.)]
Okay, it's 47.6%. Less than half of the babies who test positive have the condition. And, as it happens, fewer than half of the obstetricians got it right (generously defined as anywhere from 45% to 50%). Only 9% of pregnant women got it right.
You can read the article, or my previous post, to learn how to do this calculation correctly. But for the lazy among you, the key point is that, since 99% of babies don't have the condition (one minus the 1% prior probability), 99 out of every 100 tested have a chance to yield a false positive. So if you test 100 babies, even a highly specific test is likely to yield about one false positive; whereas only one baby actually has the condition, and so has the chance to become a true positive. So, in this case, about half the positive tests are wrong.
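If you'd rather see the arithmetic spelled out than take my word for it, here is a minimal sketch in Python (my own illustration, not anything from the BMJ paper) that gets the 47.6% two ways: straight from Bayes Theorem, and by counting babies the way I just described.

```python
# Illustrative calculation of the predictive value positive (PVP) for the
# screening example above. The numbers come from the word problem itself.

prior = 0.01        # 1% of babies have the condition (the prior probability)
sensitivity = 0.90  # chance of a positive result if the baby is affected
specificity = 0.99  # chance of a negative result if the baby is unaffected

# Bayes Theorem: P(affected | positive) =
#   P(positive | affected) * P(affected) / P(positive)
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
pvp = sensitivity * prior / p_positive
print(f"Predictive value positive: {pvp:.1%}")  # prints 47.6%

# The same answer by counting babies, out of 10,000 tested:
true_positives = 10_000 * prior * sensitivity               # 90 affected babies test positive
false_positives = 10_000 * (1 - prior) * (1 - specificity)  # 99 unaffected babies test positive
print(true_positives / (true_positives + false_positives))  # 90 / 189, about 0.476
```

Either way you slice it, a positive result is roughly a coin flip.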
This is entirely typical of screening tests. There is a lot of pressure, much of it coming from drug companies and medical societies, to promote mass screening of the population for various diseases. You've no doubt heard the exhortations to get screened for prostate cancer or breast cancer, and seen the ads from companies that will do a full body scan to look for whatever. Indeed, in the same issue of BMJ we read this (subscription only):
By Fred Charatan
The American Journal of Cardiology is at the centre of a publication ethics row after publishing a supplement sponsored by the drug company Pfizer funded for $55 800 (£29 900; €43 700). The supplement contained recommendations for screening that were not only of dubious clinical worth but would have had huge financial implications for the US health budget.
Pfizer manufactures the cholesterol treatment atorvastatin (Lipitor). The supplement suggested screening asymptomatic older US men and women for evidence of coronary artery calcium, using computed axial tomography scans, and carotid intima media thickness and plaque using ultrasonography (BMJ 2006;333:168, 22 Jul).
The US Preventive Services Task Force recommended in February 2004 not using routine screening with electron beam computed tomography as it was likely to cause harms outweighing any theoretical benefits in asymptomatic older US citizens.
Long-time readers will remember my earlier discussion of GW Bush's proposal to screen the entire population for mental disorders, using a protocol developed by drug companies; and the discussion of "incidentalomas," lesions found on images taken for unrelated purposes that lead to diagnostic tests, expense, anxiety, and even serious harm from unnecessary procedures, but which are usually benign.
Unfortunately, if doctors don't even understand Bayes Theorem, which they don't, that means that a) They overestimate the value of screening tests; and b) They misinterpret the results, and explain them incorrectly to their patients. The result is massive unnecessary, dangerous and damaging intervention with people who are, in fact, healthy.
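And the rarer the condition, the worse it gets, which is exactly why indiscriminate mass screening is such a bad bargain. Here is a quick sketch, with illustrative numbers only, of how the predictive value positive of that same hypothetical 90%-sensitive, 99%-specific test collapses as the condition being screened for becomes less common.

```python
# Illustrative only: PVP of a test with 90% sensitivity and 99% specificity,
# at a range of hypothetical prevalences.

sensitivity, specificity = 0.90, 0.99

for prevalence in (0.10, 0.01, 0.001, 0.0001):
    pvp = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    )
    print(f"prevalence {prevalence:.2%}: PVP {pvp:.1%}")

# Roughly: 91% at 10% prevalence, 48% at 1%, 8% at 0.1%, under 1% at 0.01%.
```

Screen everybody for something rare and nearly everyone who tests positive will be a false positive.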
If pregnant women considering this particular test knew that, at best, a positive result would mean less than a 50% chance that their baby has Down's syndrome, would they even get the test in the first place? Once they have the test, and are told there is a 90% or 99% chance their baby will have Down's syndrome, even though the actual chance is less than 50%, has the test done good, or harm? You tell me.
Monday, August 14, 2006
Right Questions, Questionable Answers
Alastair Wood in NEJM has some correct diagnoses for the ailing drug approval process. (Free full text on this one -- NEJM has started making some of their content of broad public interest accessible, but I'm gonna keep hammering on them until they open up the whole thing.) However, I'm not buying his prescription.
Presenting problems:
- For most approved drugs, we don't have adequate long-term safety data. Sometimes we find out the hard way that widely prescribed drugs can be dangerous; no doubt many of them are and we don't even know it, because the data necessary to find that out aren't being collected.
- Sometimes drugs are approved with the requirement that the manufacturers do so-called "Phase 4" studies, i.e. longer term follow-up on safety and/or effectiveness, but these requirements are weakly enforced, at best.
- Many drugs haven't been tested head-to-head against others. To get approval, all you need is to beat placebo, or perhaps an alternative that's known to be sub-optimal. So we don't necessarily know which drug is really best.
- Drug companies have a disincentive to develop drugs for long-term prevention, because it would take so long for them to get back their investment. For diseases associated with aging, such as Alzheimer's and osteoarthritis, such drugs would be a great boon.
- Drug companies have a strong incentive to concentrate on developing "me too" drugs -- slightly different versions of established drugs -- because there is little risk of hitting a dry hole, i.e. spending billions and never getting a commercial product. That steers investments away from true clinical breakthroughs, which are riskier to pursue. (He pays less attention to this, but they also invest in "evergreening" - slightly new formulations of their own drugs, and new combinations of old drugs, so they can keep exclusive marketing rights.)
Now, real clinical innovations are almost always based on publicly funded research, which the drug companies build on to develop commercial products. Obviously, we could just drive the public investment deeper toward clinically useful products, which would solve a lot of the problem. But Wood is dismissive of this idea, basically on the grounds that "it ain't gonna happen." (Smacks of creeping socialism and all that.)
As for safety, we could simply enforce existing requirements for Phase 4 studies, but Wood seems to assume that's asking too much. We could also develop improved surveillance and reporting systems for adverse events, and maybe piggyback some data collection on existing large-scale cohort studies to get better information. But Wood doesn't even consider these options. Similarly, we could just change the approval process to require head-to-head comparisons with existing standard treatments, or at least require more Phase 4 studies to do that, but again, it never seems to occur to Wood.
Nope. His solution to everything is to grant drug companies longer periods of marketing exclusivity if they will just do the right thing in each of these areas. That way, they can make more billions, just the motivation they need.
A significant problem with this idea, which Dr. Wood does not mention, is that it means the drugs will cost far more, for a longer period of time. Which means millions of people won't have access to them, the cost of health care will continue to rise, and inequalities will persist and widen. That would seem to be a downside.
So why does he focus exclusively on exclusive marketing rights and guaranteed return to investors, and ignore other solutions? Does this help to explain it?
Dr. Wood reports having received consulting or lecture fees during the past two years from Scirex, Sapphire Therapeutics, Abbott Laboratories, Elan Pharmaceuticals, NicOx, Medco, Novartis, and Eli Lilly; having acted as an adviser to various reinsurance companies regarding pharmaceutical matters; and serving as a director of Antigenics, chairman of the clinical advisory council and an investor in Symphony Capital, and a director of Symphony Neurodevelopment and Symphony Evolution. On September 1, 2006, Dr. Wood will become managing director of Symphony Capital. No other potential conflict of interest relevant to this article was reported.
Was there no-one else on the planet who could have written on this subject?
The Rules
No, this isn't about catching a man, it's about reasoned discourse and debate. As I have said many times, we welcome dissent and challenges here. True, I did become dismissive of the belief that the earth is less than 10,000 years old, but I have to draw the line somewhere. We aren't going to debate the flat earth society here, either.
However, it is clear that there is enough disagreement in society right now about the reality of evolution that serious discussion is needed, and I'm all for it. I also am politically opinionated about health care and public health policy, and also international relations and just about everything else that bears upon public health, which is, well, everything. I want debate about whatever issues come up here and I'll be particularly happy if somebody manages to change my mind or prove me wrong about something. But there is a right way and a wrong way.
For example, one commenter said, basically, "I haven't read most of your post but I know it's just the usual evolutionist smoke and mirrors." Now, how embarrassing is that? Characterizing something you haven't read is just proof that your mind is closed and you have nothing to contribute. As a matter of fact, the post wasn't even about evolution, it was about the basics of cell biology -- and I posted it in order to set up the reasons why it is hard to explain the origin of life.
Others just tell me to "read the Bible." This is wrong in two ways. First, you don't know what I have and have not read. As a matter of fact, I've read the Bible from cover to cover, in two different translations. I venture to say I'm considerably more familiar with it than most people who swear by it. It's also wrong because you have to say something to convince me that I ought to trust what the Bible says. And there's no reason why I should because it was written thousands of years ago by people who knew a whole lot less than we know now. Argument must depend on facts and logic, not mere appeal to empty authority.
So here are some disapproved techniques:
- Name calling: This is very popular on talk radio and certain blogs. "Feminazi," "Commie," "al Qaeda candidate," "Elitist," etc. Just hurling an insult doesn't change any minds, although it may serve to maintain the solidarity of people who don't want to be called those names. I don't care what names you call me, and neither does anybody who thinks rationally. Ask yourself, when somebody calls somebody else a name: "Leaving out the insult, is there an argument there? What are its merits?"
- Name calling in reverse: Labeling an idea with a virtue word, such as "Resolute," "God-fearing," etc. George W. Bush calls himself a Christian and says he acts on instructions from God. Why should you believe him? Anybody can make such a claim -- for example, Osama bin Laden.
- Bandwagon effect: A commenter pointed out that there are dozens of web sites extolling "Creation Science." So what? That's not an argument for anything. Half the people don't believe in evolution. So, half the people are wrong. Please move on.
- Baseless extrapolation: If people start believing in evolution, society will lose its moral compass; if gay people get married, traditional marriage will be destroyed. If we have universal health care, we'll live in a socialist dungeon. You need to demonstrate the mechanism by which these unwanted effects will occur. You can't just assert them.
- Premise shifting, changing the subject: The official definition of a troll, outside of Scandinavian folklore, is someone who tries to derail discussion by driving it down dead end paths. Usually the best way to deal with that is to ignore it entirely, or respond with equal irrelevance.
Logical fallacies, false premises and factual errors will no doubt occur. (I will no doubt make one at some point in my life.) Do your best to avoid them, i.e., to know what you're talking about and to reason carefully. However, we can deal with mistakes, and we should all promise to try to do it nicely. And, if you're uncertain of something, it's always perfectly alright to raise it as a question, or to propose it as a hypothesis.
So that's it. Bring it on.
Sunday, August 13, 2006
I believe it, and that settles it
I note that some commenters have taken to calling others "trolls." According to my Random House Unabridged Dictionary, a troll is "(in Scandinavian folklore) any of a race of supernatural beings, sometimes conceived as giants and sometimes as dwarfs, inhabiting caves or subterranean dwellings." So, that allegation seems improbable to me. I'm pretty sure these beings are mythical. Anyway, regardless of whether any of our commenters are supernatural beings from Scandinavian folklore, as I said before, no ad hominem (or ad supernaturalbeingem) attacks please. People are allowed to disagree with the host here -- although threatening or otherwise antisocial comments will be deleted.
For example, I note that a commenter attributes hydrogen bonds to cheese from the Flying Spaghetti Monster's noodly appendage. This is a damnable heresy, but I stay my hand from smiting the heretic. Hate the sin, love the sinner, as they say. The truth is that the FSM is sauced with marinara, with the merest sprinkling of cheese. (Plenty of pepperoncini, however, the holy Spice of Life.) Hydrogen bonds are likely the consequence of starchiness, not cheesiness.
Finally, a commenter doesn't care how old the world is, because how is that going to prevent him from getting diabetes? Actually, I started this whole thing precisely because it will! Understanding evolution, which in turn is only possible within the context of geological time, is essential to keeping ourselves healthy. My initial point had to do with drug resistant pathogens, but I am planning to end the series on evolution with a discussion of Type 2 diabetes.
Let me take this opportunity, however, to say something briefly about Type 1 diabetes. Diabetes is actually the name for a symptom, not a disease, and there are two main diseases that cause diabetes, known as Type 1 and Type 2. (There is also gestational diabetes, during pregnancy, and some less common conditions.) Type 1 is an autoimmune disorder in which the immune system mistakenly attacks the pancreatic cells which produce the hormone insulin. Insulin signals the body's cells to absorb the sugar glucose, the cellular fuel, from the blood stream. (Our cellular endosymbionts, the mitochondria, descendants of ancient bacteria, do the job for us of burning the fuel.) Without insulin, the cells starve.
(Type 2 diabetes is a metabolic disorder related to overweight, physical inactivity, and a diet high in sugars and simple starches and low in fibre. In Type 2 diabetes, the cells stop responding properly to insulin. More on this later.)
Type 1 diabetes, like other autoimmune disorders such as rheumatoid arthritis, lupus, and multiple sclerosis, is a pretty good argument against intelligent design. The immune system evolved to protect the body against microparasites that would otherwise eat us alive. But sometimes it gets bollixed up and starts attacking tissues of the self as well as foreign invaders. Like everything else about us, it developed in small steps, by trial and error, and so it is only as good as it happens to be. It is impressive, but far from perfect. It's easy to think of ways to design an immune system that won't make that sort of mistake, just as we could design a birth canal that isn't too small for the baby's head, or a spine that is properly designed for an upright posture instead of the jury-rigged system we have that gives so many of us chronic back pain and sciatica.
But we aren't designed. Tough luck.