C. Corax a few days back alerted me to the work of "Dr." Paul Cameron of the Family "Research" Institute, as published in the "peer reviewed journal" Psychological Reports. It was all very interesting and I thought I might say a few words about it once I'd had a chance to study up on it a bit more, but my hand has been forced by Michael Kranish, who has written a long feature on the subject in the Boston Globe. (Like all Boston Globe stories, this one will be free on-line for only one day, so if you're too late, too bad.)
To make a story that is far longer than it ought to be short enough to gag on, Jesus has called upon Cameron to turn society away from its satanically inspired embrace of the sodomites. His research institute exists to prove that homosexuals are unfit parents, grotesquely self-indulgent, and positively dangerous to the rest of us. His "research" proves that homosexual parents are something like 20 times as likely to sexually molest their children as heterosexuals, that homosexuals drive drunk at a rate several times that of the general public, etc., etc.
The American Academy of Pediatrics has found that children raised by same-sex couples do just fine, thank you. That was enough for a few Christian pediatricians to demonstrate their Christian love by bolting the AAP and founding a group called the American College of Pediatricians, which sounds very august. The ACP thinks homosexuals are weird and make bad parents, and it gets itself quoted by reporters and politicians all the time to prove that pediatricians think exactly that. Obviously, the ACP thinks Dr. Cameron is the greatest genius since Einstein.
Kranish's article is Fair and Balanced. He interviews pediatrician Ellen Perrin, principal author of the American Academy of Pediatrics report, who says that Cameron's research is not scientifically credible. Kranish makes it clear that it's a lot easier to get published in Psychological Reports than in actual, real, peer reviewed journals. And he makes it clear that Cameron's beliefs are distinctly in the minority.
What he does not do is make the least effort, lift the pinky even a millimeter off the table, to tell us what, exactly, is wrong with Cameron's "research." We get a version of Krugman's classic headline, "Shape of the Earth: Views Differ." Indeed, the headline of Kranish's story is "Beliefs drive research agenda of new think tanks." But the issue is, obviously, not that beliefs drive their agendas: it's that beliefs drive their conclusions. This is a crucial distinction. All scientists have beliefs, and they may well be motivated to study an issue by one or another form of moral passion. But scientific inquiry refuses to respect beliefs. If the results of their research overturn the expectations of real scientists, then they report their findings and change their minds.
It isn't worth it here to go into an extensive debunking of Cameron's work. You can go here if you're really interested. The point I want to make is that reporter Kranish, while he has sought out opposing viewpoints, has failed to do his job. This is not about a difference of opinion. It is about truth and falsehood, fact and fiction. The test of scientific inquiry is not the prestige or academic appointments of its protagonists, and scientific truth is not established by a majority vote of scientists. It is possible to examine the methods used by Dr. Cameron to reach his conclusions and to show that they are ridiculous. But Kranish did not take the trouble to do that.
The earth is round. And, at least in the modern U.S.A., kids raised by gay parents are not at any identifiable disadvantage. Now, if they were, we could have arguments about why, and until we had clear evidence, our opinions about that would no doubt be colored by our preconceptions. Some people would say, society's prejudices hurt the kids, and if we just got rid of prejudice, we'd solve the problem. Others would have other theories. But in this case, we can't even have that argument, because its premise is false.
Journalistic balance doesn't mean treating truth and falsehood with equal respect.
Sunday, July 31, 2005
Friday, July 29, 2005
Yesterday I was walking back to my office after lunch when a gleaming new gargantuan SUV went screaming through a red light with a blue light flashing on the dashboard and a siren making that Lost in Space swirling squeal. Another one came through right after, while a black helicopter scraped the rooftops then banked into a tight circle over downtown Boston. Twenty-five years ago I happened to be on Connecticut Ave. north of Dupont Circle when John Hinckley shot Ronald Reagan (thereby saving his failing presidency -- remind you of any more recent events?) and this felt the same. Back then, instead of white SUVs, they were dark blue sedans, but of course SUVs hadn't been invented.
Back at my computer, I determined that a couple of tourists had their bags sent ahead to South Station. When they caught up with the luggage, they heard something ticking. The Protectorate of the Glorious Fatherland shut down Amtrak and the commuter rail, evacuated the station, and sent in a robot to X-ray the bag. Tobor the Eighth Man determined that it contained a cassette player, which the tourists had absent-mindedly left running.
That night on our local Fox News at 10, the helmet-haired newsbots were nearly as excited as they normally get over a bit of inclement weather, which is saying a lot. They interviewed some people who had spent an hour and a half stuck in a train in the tunnel, and others who had been herded into a pen of yellow tape by the police while they were making the world safe for democracy. All the people seemed proud to have done their bit for the Global Struggle Against Violent Extremism. A police deputy superintendent praised the tourists for coming forward and urged the rest of us to do the same.
Osama doesn't have to blow anything up. He doesn't even have to phone in a threat. He just has to sit back in his cave and watch us on Fox, whacking ourselves in our tender parts with a shiny silver, rhinestone-encrusted baton.
Thursday, July 28, 2005
Revere, a while back, described the conference on "Scientific Evidence and Public Policy," sponsored by the Project on Scientific Knowledge and Public Policy, which took place in March 2003. The wheels of justice may grind slow, but the wheels of scientific publishing grind at least as slowly. Papers from the conference have just been published in a special issue of the American Journal of Public Health. I'm kind of an old fashioned guy, so intrigued as I was by Revere's post, I waited for the dead tree version to arrive in my mailbox.
As it turns out, despite the title of the conference, the major focus of the papers is not policy making by the executive and legislature so much as it is expert testimony in civil litigation. Nevertheless many of the same issues arise in all contexts. How are laypersons -- judges, juries, elected officials, political appointees overseeing regulatory agencies -- to evaluate the validity of scientific claims, and the implications of those claims for legal judgments or policy choices?
The papers mostly concern litigation in which large corporations are being sued for damage allegedly caused by toxic exposures to workers, consumers or the general public. These defendants have vast financial resources which they use to disparage the work of scientists whose findings tend to support the plaintiffs, to create an exaggerated impression of scientific uncertainty, and to create an illusion of scientific controversy where little or none exists. Of course we see this in regulatory and lawmaking contexts as well, such as the oil industry and its employees in the White House denying the reality and/or significance of global warming caused by burning fossil fuels.
Unfortunately, contrary to what seems to be the popular belief, there is no simple description of valid "scientific method," no simple screen for good vs. bad science, no paper strip that turns pink when a scientific conclusion is justified, and blue when it is not. This problem is troubling to me because I am very averse to any tyranny of expertise. The notion that we should all accept a conclusion because a distinguished panel of the National Academy of Sciences pronounces it is as repugnant to me, in some ways, as the notion that we should accept pronouncements by the Pope, or the long-dead biblical scribes. Indeed, many people do not see any evident difference between the two propositions and they think of science as just another religion; so why is evolution to be privileged above creationism?
In the thimerosal case, I plead guilty with extenuating circumstances. While I have some ability to evaluate the scientific literature on this question on my own, I principally ask people to agree that there is no good evidence that thimerosal causes autism, let alone is responsible for an autism epidemic, because that is the scientific consensus, and Kennedy lacks relevant expertise. In this particular case, Eli Lilly has not had to spend millions of dollars on muddying the waters because Kennedy has no credible scientific allies. Completely independent experts, who are not paid by the vaccine industry and owe them no allegiance, have reviewed the evidence and found it wanting.
But this obviously does not satisfy the many parents who are convinced that they know why their children are autistic, who are angry about it, and whose champion is Robert Kennedy Jr. And after all, why should it? Is he not a credible person? And do people who lack exactly the right doctorates have no right to opinions about scientific questions? Where would that leave me, with my Ph.D. in social policy, writing a blog that is far more wide ranging?
This blog is largely a search for answers. Right now, I don't have them.
BTW: You can read the conference papers here, fortunately, because the American Journal of Public Health is subscription only. This is one of my biggest annoyances: scientific journals are extremely expensive, you can't buy them on the newsstand, and you can't read them on the Internet unless you happen to have a faculty appointment somewhere. On the other hand, you can read everything there is from the Discovery Institute and, for that matter, RFK, free of charge. How about the American Public Health Association putting the public in public health by letting the public read its journal?
Wednesday, July 27, 2005
Okay, here goes. As anyone who has visited this site knows, I'm not doing this because I have the least trust in the pharmaceutical industry or federal regulatory agencies. But the thimerosal controversy (which is now largely a one-person show, driven by the celebrity of a single champion), is a good opportunity to think about some of the complexities, difficulties, and also the strengths of epidemiology. It's also a chance for us to think about the nature of evidence in general.
To people who tend to share my general perspective on politics, Robert Kennedy Jr. is a sympathetic figure. He has written a compelling polemic (Crimes Against Nature) about the environmental catastrophe facing the planet. He is an attorney for the Natural Resources Defense Council, a well-established crusading Washington lobbying group. (I should point out, however, that the NRDC has not endorsed Kennedy's theories about thimerosal and autism, despite a history of being nearly paranoid about toxins getting into humans.) In his Rolling Stone article, Kennedy quite pointedly pits himself against Bill Frist, who is trying to protect Eli Lilly, the manufacturer of thimerosal, against lawsuits. While that is wrong in principle -- the proper place to resolve such disputes is in the courts -- that Bill Frist is a champion of Eli Lilly does not constitute evidence that thimerosal causes autism. Frist would be on the side of his campaign contributors no matter what.
First, if we're going to look for the cause of a disease entity, we need to define it. This is actually quite problematic for many kinds of disorders, but particularly so for ones that manifest as behavior.
The word autism actually applies to three major diagnostic entities, which are commonly referred to as "autistic spectrum disorders." This seems to imply that they represent degrees of severity of the same entity, but there is actually no evidence for this. They could have unrelated etiologies (underlying causes). Indeed, it is not at all certain that any of the three major ASDs are indeed single entities. No two people diagnosed with autism have exactly the same symptoms.
The popular image of autism is strongly colored by an entity called Asperger syndrome. People with Asperger syndrome often have above average or even superior intellectual functioning as measured by IQ. They may have superior verbal fluency and they often have strong, but unusually focused intellectual interests. However, they have impaired social talents. They are not intuitively able to read other people's feelings or detect and respond to social cues, as most people can do without even thinking about it or noticing what they are doing.
Classic autism is another matter. Children with this diagnosis usually have highly impaired verbal ability, and many do not talk at all, or merely echo what they hear. They are profoundly withdrawn, typically are very intolerant of changes in routine or strong stimuli, have great difficulty in making any sort of social contact, and often engage in repetitive, sometimes self-injurious behaviors. They typically score very low on IQ tests. But the severity of this disorder is quite variable, and many children, with patient, intensive guidance and a highly structured environment, do make progress in communication and social engagement. Somewhere in between is so-called Pervasive Developmental Disorder Not Otherwise Specified, which is a garbage can category for children who have autism-like symptoms but don't really fit the definition.
In his Rolling Stone article, Kennedy writes that autism "was unknown until 1943, when it was identified and diagnosed among eleven children born in the months after thimerosal was first added to baby vaccines in 1931." He obviously wants us to think that autism did not exist before that time. This is just unconscionable intellectual dishonesty. Essentially all of the psychiatric diagnoses we use today emerged during the 20th Century, and many of them have been extremely controversial. At one time, physicians did not distinguish between what we today call schizophrenia, and tertiary syphilis. That does not mean that schizophrenia did not exist prior to the 20th Century, or that it was actually syphilis. It just means that people used to think that schizophrenics were possessed by demons, or they had various other theories about people who behaved bizarrely, but they didn't call it schizophrenia or define a single set of symptoms that corresponded to the diagnostic criteria we use today.*
Many suspect that Isaac Newton and Albert Einstein may have had Asperger syndrome. There are many tales from before the 20th Century of people who may have been autistic, but at the time they were thought to have been raised by wild animals, or had their souls stolen by supernatural beings, and so on. Once diagnostic criteria for autism had been established, naturally cases began to be identified. And as always happens in such situations, as the disorder became more widely known and clinicians learned how to ascribe the label of autism to individuals, the number of cases identified tended to grow.
Once we have a method of assigning a diagnostic label, how do we establish the prevalence of a condition? Actually, it is not easy. Physicians are required to report some diagnoses, such as TB and HIV infection, but for the vast majority of diseases, there is no surveillance system.
Kennedy claims there has been an explosive epidemic of autism, coinciding more or less with the increased use of thimerosal in vaccines. However, the statistics used to establish this come from the U.S. Department of Education, and are based on reports from school districts. As should be obvious, school districts identify children as autistic when they qualify for special education services. In the most recent issue of Pediatrics, James Laidler and colleagues report that these statistics are not a useful estimate of the prevalence of autism at all. From 1993 to 2003, it appears from U.S.D.E. data that there was a shocking increase in the prevalence of autism, from 5 children in 1,000 to more than 25/1,000. But of course that isn't so. It is simply that more children are being identified with autism by schools. As Laidler points out, the states use varying definitions. Hence the prevalence in Washington is 1/3 the prevalence in Oregon.
Dr. Laidler has personal experience with this: a teacher told him that his own son was autistic, but his son is not. The school district's criteria for assigning a label of autism are different from the medical criteria. Educational assessments are done for the purpose of establishing eligibility for services, not for establishing medical diagnoses.
Special studies conducted during the 1980s and early 1990s found lower rates of autism than more recent studies, so there is evidence that autism has become more prevalent in the United States, although there is not proof because it is difficult to be sure that these studies are really comparable. In any case, this obviously doesn't tell us anything about the cause. Kennedy is very enthusiastic about studies by Geier and Geier which correlate, over time, thimerosal exposure in the population with autism prevalence as derived from the Department of Education data. But as we have already seen, the USDE data is not a useful measure of the true prevalence of autism. Furthermore, as Sarah K. Parker and colleagues, writing in Pediatrics in Sept. 2004 show, Geier and Geier's estimates of average thimerosal exposure in the population are also unreliable.
This type of study is called an "ecological study." That has nothing to do with the most familiar meaning of the term -- the systematic interactions of biological species. Rather, it means that two or more factors are measured on average over groups of people, rather than being measured directly for individuals. Of all epidemiological methods, it is probably the weakest for inferring causation. Even if the data were reliable, the findings would be merely suggestive. Many other things have changed in children's environment over the same time period, and this method gives us no particular reason to believe that thimerosal is responsible. It could serve to rule out the hypothesis, but not to confirm it. But in fact, the data they used are completely inappropriate for the purpose, and the study (actually they published the same results three different times) is worthless.
Next: Methodologically stronger studies; and the conspiracy theory.
*BTW: This is an excellent example of what social scientists mean by "the social construction of reality." It doesn't mean we are non-materialists, or we don't believe in the existence of a noumenon. Kennedy's mistake, in fact, is to confuse social construction with reality itself -- a very lawyerly sort of mistake, in my view.
Tuesday, July 26, 2005
In the United States, something like 36% of all HIV infections are attributable, directly or indirectly (through sex with an infected person), to injection drug use (IDU). Estimates vary widely but depending on the city, the prevalence of HIV among injection drug users ranges from 5% to 40%. People with addictions who do not inject drugs also have about twice the HIV prevalence of people without addictive disorders, and people who enter alcoholism treatment have rates from 5% to more than 10%. People living with HIV have been estimated to have a prevalence of substance abuse disorders as high as 44%, with lifetime rates as high as 60%.
Rates of other mental illness diagnoses among people with HIV vary tremendously, but it appears that something like 1/3 to 1/2 meet the criteria for at least one mental disorder, which is considerably above the population prevalence of less than 25%. Substance abuse treatment programs typically find that more than 50% of their clients have other mental disorders, while mental health providers typically find that from 20-50% of their clients have substance abuse disorders. (I am indebted to a review by Klinkenberg, et al in AIDS Care 2004; 16:suppl, for a convenient summary, but these basic facts are well known by everyone who works in the field of HIV.)
Then, of course, a high percentage of people living with HIV are gay, and/or African-American or Latino.
Abe Feingold, in Mental Health AIDS, discusses stigma. HIV/AIDS itself is a stigmatizing condition. Feingold quotes a report from the Health Resources and Services Administration. "HIV-related stigma refers to all the unfavorable attitudes, beliefs and policies directed toward people perceived to have HIV/AIDS. . . Patterns of prejudice which include devaluing, discounting, discrediting, and discriminating against these groups of people, play into and strengthen existing social inequalities -- especially of gender, sexuality, and race -- that are at the root of HIV-related stigma." Feingold tells us that "Research has demonstrated that HIV-related stigma and discriminatory practices can negatively affect condom use, HIV-test seeking behavior, willingness to disclose HIV-positive serostatus, the pursuit of HIV-related health care, and the solicitation of social support."
A few months ago I got annoyed with the executives where I work because we needed to relocate and they weren't coming up with a new place. The vacancy rate for commercial real estate around here is huge, and landlords are begging for tenants. But when they find out that we're serving homeless, HIV positive mentally ill drug addicts, they seem to find a reason not to rent to us.
I once interviewed a woman who concealed her HIV status from her own children. When she went to the drugstore, she'd immediately pour the pills out of the bottle and into a plastic bag so they couldn't see the label. She kept them under the bed and took them in secret. I interviewed a man who was diagnosed in the hospital after he was admitted for pneumonia. The nurse (unconscionably) told his mother that he had tested HIV positive, and she didn't speak to him for six months. I interviewed another woman who refused to accept treatment while she was in jail because the other prisoners knew what HIV medications look like.
Many people I have interviewed have been ostracized by their families and have lost most of their friends. One guy is allowed to visit his brother's house when they have family get-togethers, but they make him sit out on the porch, use plastic utensils and paper plates, and they won't let him hold the baby. Some people almost never leave the house. It's different in the mostly middle class, mostly white urban gay community, but African American, Haitian and Latino people living with HIV, and people of all ethnicities in small towns and suburbs, still face denunciation from the pulpit, shame and shunning in their communities.
Moral condemnation is the enemy of public health, and the opposite of compassion.
Monday, July 25, 2005
I thought that post on Bayes' theorem was gonna hit at least 8.5 on the bore-o-meter but evidently it was painless so let's keep going. Revere refers to the Positive Predictive Value. That is the percentage of all positive tests which are true positives, in other words it's the number we calculated for the Chimptastic virus test.
The Negative Predictive Value is, obviously, the percentage of all negative tests that are true negatives. As Michael Siegel points out, the Lyme Disease test has poor negative predictive value. As a matter of fact, the same thing happened to my mother as happened to C. Corax's friend. I was visiting one time and she showed me a rash. I immediately said, "You have Lyme disease." Her doctor, however, ordered the lab test and it came back negative, so he decided she didn't have it. It took a couple of months before he conceded that she did too have it after all, and meanwhile she developed some of the chronic symptoms, which it took her many months, at least, to get over.
Some tests, like the Lyme disease test, yield a dichotomous result -- the rabbit dies or it doesn't -- but some tests, such as the PSA, yield a continuous range of values and you have to decide where to set the threshold for action. By setting the threshold lower, you increase the sensitivity of the test, but lower the specificity. If you do that, you would expect the PPV to go down, and the NPV to go up. But Revere discovered that it's not necessarily that simple.
The PPV and the NPV depend on the specificity and sensitivity of the test, respectively, but also on the prevalence in the population of the condition. Just as important in deciding whether a test is valuable, however, are the consequences of false results and the benefits of true findings. These in turn depend on many factors, including whether there is an easy way to confirm or refute the initial finding, the effectiveness of available treatments (diagnosis isn't much good if you can't do anything about it), and whether the test is being used for diagnosis or screening.
With a diagnostic test, we don't start with the population prevalence because the doctor already has some reason to suspect the person has a disease. This presumably raises the PPV, although it lowers the NPV. That's why the Lyme disease test is utterly worthless. The doctor only orders it because Lyme disease is already suspected, but the test cannot rule it out. So the test is not providing any useful information. Doctors like tests. Tests make them feel very scientific and rigorous. But in this case, they should try using good judgment instead. Evidently that's too much to ask.
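Since this arithmetic is easy to get backwards, here's a minimal sketch of how PPV and NPV shift as the pretest probability rises from screening levels to serious clinical suspicion. The numbers are purely illustrative (a 90%-sensitive, 90%-specific test), not taken from any real assay:

```python
# Sketch: PPV and NPV as functions of pretest probability (prevalence),
# with sensitivity and specificity held fixed. Illustrative numbers only.
def ppv_npv(prevalence, sensitivity, specificity):
    tp = prevalence * sensitivity              # true positives
    fn = prevalence * (1 - sensitivity)        # false negatives
    tn = (1 - prevalence) * specificity        # true negatives
    fp = (1 - prevalence) * (1 - specificity)  # false positives
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.01, 0.10, 0.50):  # mass screening vs. growing suspicion
    ppv, npv = ppv_npv(prev, 0.90, 0.90)
    print(f"pretest probability {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

At a 1% pretest probability the PPV is dismal while the NPV is nearly perfect; by 50% suspicion the PPV has climbed to 90% and the NPV has fallen to match. Which is exactly why a test that's fine for one purpose can be useless for another.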
All of which brings us to the real point of this post, which is screening mammography. (Don't worry, we've almost made it to thimerosal!) For a long time, back in the swinging decade of the 1980s, it was very controversial whether all women 40 and older, or 50 and older, or of some age or other, should undergo mammographic screening. There are a number of complex issues involved, but one of the most salient is that screening mammograms find a lot of abnormalities called Ductal Carcinoma in Situ, DCIS. These are abnormal cells which are officially cancer, but they aren't invasive or metastatic. Nobody knows what percentage of them, if left alone, would go on to become harmful disease. We will never know, because every time we find them, we have to take them out, just in case. Women diagnosed with DCIS have all sorts of decisions to make -- just remove the lesion, or remove the entire breast; follow up with radiation and drug therapy, or not. It all costs money and causes fear and pain.
Anyhow, the unelected authorities who decide these matters (associations of oncologists and radiologists, the American Cancer Society, the National Cancer Institute, etc.) decided in 1988 that women should begin mammographic screening at age 40, based on calculations that it reduced the ultimate death rate from breast cancer. Then, in 1992, the National Cancer Institute changed its collective, disembodied mind, and decided that screening shouldn't start until age 50. People spent the 1990s screaming and yelling about this. I will just point out that oncologists and radiologists have an obvious conflict of interest in this whole controversy because, duhhh, mammographic screening means lots of business for them.
Anyway, comes now Joann G. Elmore, M.D., M.P.H., of the University of Washington. She and her colleagues have done what's called a Case Control Study of women who died from breast cancer between 1983 and 1998, compared with women who did not have cancer, matched for age and risk factor. It turns out that both groups had about the same rate of screening, which would seem to mean that screening has nothing to do with your chances of dying from breast cancer. (This is in the new Journal of the National Cancer Institute, subscription only, I'm afraid.)
Interestingly enough, the press release accompanying this article says that "Among women with an increased risk of the disease, the authors did see a 26% reduction in breast cancer mortality associated with screenings, but this was not statistically significant." As we have seen, just because it isn't statistically significant doesn't necessarily mean it isn't real. But what's really interesting is that the press release is wrong. The Odds Ratio was .74. That does not mean that the risk was 26% less.
Rats! We've stumbled across yet another mathematical concept. I'll explain that odds ratio thing another time, but the bottom line is, this kind of study can give us information about whether or not screening is associated with lower mortality, but it can't tell us directly how that translates into the probability that it will actually save your life. All we know is, it appears to be low. But the person who writes press releases for JNCI, doesn't understand this, which means, in turn, that any stories in the news media about this will get it wrong as well. This happens all the time.
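Meanwhile, here's a toy calculation showing why an odds ratio of about .75 is not the same thing as a 25% or 26% lower risk. The risks below are invented for illustration; they are not from the Elmore study:

```python
# Toy numbers, made up for illustration -- NOT from the JNCI study.
# Point: an odds ratio and a risk ratio are different animals.
def odds(p):
    return p / (1 - p)

risk_screened = 0.20    # hypothetical risk of death with screening
risk_unscreened = 0.25  # hypothetical risk of death without it

risk_ratio = risk_screened / risk_unscreened           # 0.80: risk is 20% lower
odds_ratio = odds(risk_screened) / odds(risk_unscreened)
print(round(risk_ratio, 2), round(odds_ratio, 2))  # 0.8 0.75
```

The rarer the outcome, the closer the two numbers get; when the outcome is common, as in this toy example, reading an odds ratio as a risk reduction overstates the effect.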
Finally, I'm not your doctor. If you are a woman 40 or older, and your doctor is telling you to get a mammogram, you need to make up your own mind. If you haven't had babies early and often, if your mother or sister has had breast cancer, you probably should be more inclined to do it. If abnormalities in your breasts have been detected in the past, you might consider yourself in the diagnostic, rather than screening category, and continue to get mammograms on that basis.
You might value peace of mind and feel that getting the test will give it to you. Or you might have more peace of mind without it. That's your call. And this is only one study. We may know more in the future. The bottom line for me is, it's your life, it's your body. One size doesn't fit all.
Saturday, July 23, 2005
I generally stay away from personal stuff here, but just so y'all know, my brother is getting married today so I may not get back to any heavy duty posting till Monday. (Need to take care of some of the aftermath tomorrow.)
In spite of the claims of many conservative people who don't like our new, more inclusive concept of marriage here in the state where Anglo-America began -- with Puritans no less -- marriage has taken various forms and had various meanings at different times and places. It most certainly has not always been between one man and one woman. (Read your Bible!) For most of European history, it was essentially an economic arrangement, with political and military considerations coming into the picture in prominent and powerful families. For those families, it was not about parenting, since they left that to servants. And powerful men were free to procreate with any number of different women. For the great majority of people, however, the essential nature of the family was that it was the basic unit of production. Children were above all an economic asset, and cherishing them was a luxury.
Today, we think of marriage as an intensely intimate emotional alliance, and we are disappointed when that aspiration is unfulfilled. Production, of course, takes place in bureaucratic corporations for the most part. While family enterprises do survive, children don't generally get involved in them, if at all, until they are grown.
My brother and his soon-to-be spouse have lived together for several years, and they have no plans to have children. So, I suppose, their choice is as illegitimate to so-called "Christian" conservatives as gay marriage would be. After all, the most common argument I have heard against gay marriage is that marriage is supposed to be about procreation. In fact, marriage is whatever we make of it.
So congratulations bro, and Meg. I'm sure you'll continue to make it work, even though it's now official.
Friday, July 22, 2005
Don't worry folks, we're almost done. All that jive with the coin flips and the standard deviations and the p values is the ticket to half of the world of probability and statistics. You need one more ticket, and then we can travel together everywhere.
Suppose there's a rare but serious disease, Chimptastic Virus (CV), that makes you think you're Alexander the Great, causes you to lose the ability to speak in complete sentences, to stay upright on a bicycle, or to swallow pretzels. Sounds pretty serious, huh? Fortunately, only 1% of the population is infected.
There's a diagnostic test for CV which comes back positive 90% of the time when people actually have the virus. (That's called the "sensitivity" of the test, by the way.) If you don't have the virus, it comes back negative 90% of the time. (That's called the "specificity.") Sounds like a pretty good test, huh? The only bad news is that if you test positive, you have to be locked in a padded cell for the fourteen year incubation period to see whether symptoms emerge.
Okay, you take the test. Oh no! It came back positive! What are the chances that you're infected? (You have one minute to think about it. I'm going out for a cup of coffee.)
Time's up! What was your answer?
Here's mine. Or, well, actually, it's the answer of an English minister named Thomas Bayes who died in 1761. But the copyright has expired.
Before you took the test, we figured your chances were 1 out of 100, like everybody else's. 99 out of 100 people who get the test don't have CV. But, 10% of those people will test positive. On average, that's 9.9 people out of 100. One person actually has CV. That person will test positive 90% of the time. Out of 100 people who take the test, on average, there will be .9 true positives. So if 100 people take the test, on average, there will be 10.8 positive tests. So you've got 9.9 false positive tests for every .9 true positives.
.9/10.8 = .083333 (or 8.333%).
If you test positive, you have less than a 10% chance of having CV, even though the test is 90% specific!
We can write out a formula for this, but you don't have to remember it, you just have to remember the general idea. First, we calculated the overall probability of a positive test result, which turned out to be 10.8%, or .108. Let's call that probability P(+). We already knew that the probability of being infected, all other things being equal, is .01 (i.e., 1%). Let's call that P(infected). And we know that if you aren't infected, the probability of a positive test is .1 (10%). We'll use a vertical bar -- | -- to designate the probability of one thing given another, like this: P(+|~inf) means "the probability of a positive test given that you are not infected."
So, the probability that you are infected, given a positive test, is P(inf|+), and we find it by the formula
P(inf|+) = [P(+|inf) × P(inf)] / P(+).
We already figured out that P(+) is .108, and we know that P(+|inf)=.9 and that P(inf)=.01. So the equation becomes (.9 *.01)/.108 and sure enough, that equals .0833333333333333333, or 8 1/3%, just like I said before.
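For the programming-inclined, the arithmetic above is easy to check in a few lines of Python. This is just a sketch plugging in the numbers from the example (1% prevalence, 90% sensitivity, 90% specificity):

```python
# Bayes' theorem for the hypothetical CV screening test:
# P(inf|+) = P(+|inf) * P(inf) / P(+)

prevalence = 0.01    # P(inf): 1% of the population is infected
sensitivity = 0.90   # P(+|inf): chance of a positive test if infected
specificity = 0.90   # P(-|~inf): chance of a negative test if not infected

# Overall probability of a positive test, P(+):
# true positives plus false positives.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Probability of infection given a positive test.
p_inf_given_pos = sensitivity * prevalence / p_positive

print(round(p_positive, 3))       # 0.108
print(round(p_inf_given_pos, 4))  # 0.0833
```

Same answer, no padded cell required.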
Why did I inflict all this garbage on you? Because it's extremely important, when your doctor tries to sell you a screening test, that you understand what the test is really going to tell you. If the condition is uncommon in the population -- and just about everything we screen for is, including breast and prostate cancer, at least for people younger than 70 or so -- and given that very few tests have specificity much above 90%, most people who have positive results will not have the disease.
However, they will have to go through whatever happens next: fear, more tests, biopsies, expense, you name it. It isn't necessarily worth it. The usefulness of a test depends more on its specificity than on its sensitivity. The cost of a false negative is theoretically low. We didn't know before and we still don't know, but what have we lost? Of course it could give false reassurance and lead to complacency, as in the case of the very insensitive test for Lyme disease. But there is inevitably a monetary, emotional, and often a physical cost to a false positive. The rarer the condition, the more specific the test has to be to make it worthwhile.
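To see that last point concretely, here's a sketch using a hypothetical test held at 90% sensitivity and 90% specificity, showing how the chance that a positive result is a true positive collapses as the condition gets rarer:

```python
# Positive predictive value (chance a positive test is a true positive)
# at several prevalences, with sensitivity and specificity fixed at 90%.
sens, spec = 0.90, 0.90

ppvs = {}
for prevalence in (0.5, 0.1, 0.01, 0.001):
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    ppvs[prevalence] = true_pos / (true_pos + false_pos)
    print(f"prevalence {prevalence:>6}: chance a positive is real = {ppvs[prevalence]:.3f}")
# At 50% prevalence a positive test means a 90% chance of disease;
# at 0.1% prevalence it means less than a 1% chance.
```

Same test, wildly different meaning, depending only on how common the condition is.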
Unfortunately, it has been found in study after study that most doctors, believe it or not, don't understand Bayes' Theorem. They think that if a test is 90% specific, somebody who tests positive probably has the disease. Now you know better.
Thursday, July 21, 2005
Okay, now that we've gotten that statistical significance thing in our tool kits, let's put it to work. I haven't managed to stimulate a whole lot of cries of outrage here, but I did once manage to get a rise out of somebody by saying that antidepressants don't work. How can that be? There are millions of depressed people who have taken antidepressants and gotten better.
Yes there are, but it turns out that people with depression also get better if you give them gel caps filled with kitty litter. In other words, there is a very strong response to placebos in depression. So the question is not actually whether antidepressants work, it's whether they work any better than burying a clove of garlic in the back yard under a full moon and then swinging a dead cat around your head three times.
The first question is, "What do you mean by 'better'?" The usual definition is a better (lower) score on something called the Hamilton Rating Scale for Depression. This has various items, some of which score from 0 to 2, others up to 4. The most popular 17 item version can score up to a total of 52. In some clinical trials, but not all, people improved more on one or another antidepressant than they did on placebo.
Joanna Moncrieff and Irving Kirsch, in the latest British Medical Journal, address the efficacy of antidepressants in adults. They tell us that, "Although the NICE [National Institute for Health and Clinical Excellence] meta-analysis of placebo controlled trials of Selective Serotonin Reuptake Inhibitors found significant differences in levels of symptoms, these were so small that the effects were deemed unlikely to be clinically important." So why are SSRIs supposedly so effective? Because changes in the Hamilton score are arbitrarily divided into ranges considered to constitute remission and non-remission. If we consider, for example, that a 12 point improvement is the cutoff for "success," then a patient with an 11 point improvement is a therapeutic failure, while a patient who does one point better is a success.
Let's say that 50% of the people on placebo get to 12 points or better improvement, and 65% of people on SSRIs do. Then one might conclude that 15% of the people are actually benefiting from SSRIs, which is about the average finding. (I'll bet you thought it was much better than that!) Even that conclusion is largely arbitrary. And the real difference in average response between SSRIs and placebo is too small to matter.
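Here's a sketch of how an arbitrary cutoff turns a small average difference into a big-sounding "response rate" difference. The numbers are made up for illustration (roughly normal improvement scores, a 2-point average drug effect against an 8-point spread), not taken from any actual trial:

```python
# Illustrative only: dichotomizing a continuous outcome at an
# arbitrary cutoff. The means and spread below are invented.
import random

random.seed(1)
N = 100_000
CUTOFF = 12  # points of Hamilton-score improvement counted as "success"

# Improvement scores: the "drug" shifts the average by only 2 points,
# small compared to the 8-point spread between patients.
placebo = [random.gauss(12, 8) for _ in range(N)]
drug = [random.gauss(14, 8) for _ in range(N)]

def response_rate(scores):
    """Fraction of patients at or above the arbitrary cutoff."""
    return sum(s >= CUTOFF for s in scores) / len(scores)

print(f"placebo 'response' rate: {response_rate(placebo):.2f}")  # about 0.50
print(f"drug 'response' rate:    {response_rate(drug):.2f}")     # about 0.60
```

A 2-point shift that almost nobody would notice becomes a ten-point gap in "responders."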
But wait! The Hamilton Depression Scale has several questions about sleep and anxiety, so any sedative will produce improvement, even if it doesn't do anything for depression. There are other methodological problems with antidepressant trials. For one thing, antidepressants have noticeable side effects, so people in the "intervention" arm -- the people actually taking the drug -- know something is happening, while the people on placebo don't feel anything. This presumably amplifies the placebo effect, but it might do just as much good to give people something that just made them feel a little bit nauseous or light headed.
Finally, long-term results for people given antidepressants are not encouraging. Most people continue to have bouts of depression. Indeed, the only relevant study shows that people with depression who were prescribed antidepressants had worse long-term outcomes than people who weren't given them!
Bottom line? These researchers are convinced that SSRIs are worthless. And so am I.
I remain defiantly in the IDon'tKnowWhatI'mTalkingAbout zone, still not having read the NPR report discussed below (but C.Corax has what appears to be a pretty thorough summary, please see the comments). But let's presume the reporter really is saying that the observed decline in the sex ratio at birth is going to cause a major social upheaval. Manufacturers of toy trucks go bankrupt! Doll manufacturing stock soars! Sell Blue! Buy Pink!
I really don't think so. Compared with 1974, out of every 1,000 births, there are now about three more girls and three fewer boys. Nobody would even have noticed this, if we didn't register every live birth (supposedly) in the United States, and there weren't federal employees being paid to count them all.
But it's statistically significant!
Yes it is. That means, precisely, that it is unlikely to be due just to chance. It doesn't mean it's big enough to matter. How do we decide that an observed difference or association is statistically significant? Duck and cover! Here comes math! But don't worry. You don't actually have to remember all the details of this, you just need to get the general idea.
Everybody knows that if you flip a coin, it will come up heads half the time. Well, no, it won't. If you flip a coin twice, it will come up heads twice 1/4 of the time; it will come up tails twice 1/4 of the time; and it will come up heads once and tails once half the time. Here's what happens when you flip a coin four times:
If you flip a coin an infinite number of times, those bars smooth out and you get a curve that looks like this. (The bars in this image represent typical real data, which often is similar to the curve but doesn't follow it exactly.)
This is called the "normal curve." The number in the middle, at the highest point, is the mean (average) of all the values on the curve. For example, going back to the four coin flip graph, let's say we call tails zero and heads one. Then the values of the bars are 0 (four tails, on the left); 1 (3 tails, 1 heads); 2; 3; and 4. Strictly speaking, the mean should weight each value by how often it occurs (1, 4, 6, 4, and 1 times out of the 16 possible sequences), but because the distribution is symmetric, the simple average gives the same answer: 0+1+2+3+4=10, divided by five values, equals 2, which is the value of 2 heads and 2 tails, the middle bar. Tah dah! (Each of these values is traditionally called a value of "X.")
It turns out that if you take the difference between each value of X and the mean, square each of those differences, add them all up, divide by the number of values, and take the square root of the whole thing, you get a number called the standard deviation (sd). In a normal distribution, about two thirds (.6826) of all values are within one sd of the mean; about 95% (.9545) are within two sd of the mean; and more than 99% (.9973) are within three sd of the mean. This is true of all normal distributions.
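If you want to see the recipe in action, here's a small simulation. The mean of 100 and sd of 15 are made up for illustration; the point is that the standard deviation computed from scratch recovers the two-thirds/95%/99% rule:

```python
# Compute the standard deviation by the recipe, then check the
# 68%/95%/99.7% rule on simulated normal data (made-up mean and sd).
import math
import random

random.seed(0)
data = [random.gauss(100, 15) for _ in range(50_000)]

mean = sum(data) / len(data)
# The recipe: square each deviation from the mean, average the
# squares, take the square root.
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / len(data))

def within(k):
    """Fraction of values within k standard deviations of the mean."""
    return sum(abs(x - mean) <= k * sd for x in data) / len(data)

print(round(within(1), 2), round(within(2), 2), round(within(3), 3))
# roughly 0.68, 0.95, 0.997
```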
So, there's one more thing you need to know. If you take random samples from some population, for example if you call people at random and ask them their height, the sample means won't be exactly the same as the true mean of the population, but they'll tend to be close. Specifically, they'll be normally distributed, with the mean of all samples the same as the true population mean. The standard deviation of the sample mean is called the Standard Error.
If I take two samples, and their means are more than two standard errors apart, it's quite unlikely -- less than a 5% chance -- that they really come from the same population. When the sample size "n" is larger, the standard error tends to be smaller. Of course, it's not that simple, because I don't actually know the population standard deviation, which I need to calculate the standard error, so I have to estimate it from the samples. That makes the test a little weaker, but I can still calculate how accurate my estimate of the standard error is likely to be, and come up with a probability that my underlying populations really are the same even though my sample means are different. That probability is called p.
So, we can think of each year of birth records as a kind of sample from all of the years in which women have been giving birth in the U.S. What is the probability that the tiny difference between 1974 and 2002 in the sex ratio is due to chance? It happens to be very small, but that's only because there are a helluva lot of births every year. Remember that standard errors go down as numbers go up (because, in the formula, we divide by the square root of n, the number of cases). Arbitrarily, we say that a difference is "statistically significant" when p is less than 5%. When n is large, a small difference can be "significant," but it might be so small that nobody could possibly care about it. When n is small, a large difference might not be "statistically significant," but that doesn't mean it isn't real -- it just means our sample was too small for us to prove it to an arbitrary standard of probability. It might still matter a lot.
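To make the role of n concrete, here's a sketch of a standard two-proportion test. The proportions roughly match the sex-ratio example (ratios of 1.053 and 1.048 correspond to about 51.29% and 51.17% male births); the sample sizes are illustrative:

```python
# Illustrative: the same tiny difference in a proportion is not
# "significant" with a small n, but becomes so as n grows.
import math

p1, p2 = 0.5129, 0.5117  # share of male births implied by ratios 1.053 and 1.048

def z_statistic(p1, p2, n):
    """Two-proportion z test, n observations in each group."""
    p_pool = (p1 + p2) / 2
    # Standard error of the difference between the two proportions.
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    return (p1 - p2) / se

for n in (10_000, 1_000_000, 4_000_000):
    z = z_statistic(p1, p2, n)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"n = {n:>9,}: z = {z:.2f}  ({verdict} at p < .05)")
```

With ten thousand births per group the difference is statistical noise; with four million (about one year of U.S. births), exactly the same difference clears the p < .05 bar.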
Soon, I will once again be entertaining.
Edited 7/22 for clarity
Wednesday, July 20, 2005
C.C. asks about a story she heard on National Pubic Radio (did I just make a typo?) regarding the sex ratio at birth in the U.S. I didn't hear the story, so I'll just have to take her word for it about the content.
The National Center for Health Statistics, which keeps track of data from birth and death certificates, among other duties, issued a report on June 14 about long-term trends in the sex ratio at birth in the U.S. (In case I haven't bored you enough already, the report is here (pdf).) For background, you need to know that, as we have known for hundreds of years actually, in humans slightly more male babies are born than female babies.* In the U.S., since about 1974, the excess of male babies has been declining, however. Not by much -- you'd hardly notice it -- but there are a helluva lot of babies born in the U.S. so even slight differences are statistically significant. Specifically, the sex ratio declined from 1.053 in 1970 to 1.048 in 2002. Not a big deal.
Biologically, the sex ratio tends to decline with the age of the mother, with the age of the father, with the number of babies the mother has had previously, with lower maternal weight, with stressors such as earthquakes and economic catastrophes, and with environmental toxins. The CDC report mentions these factors, but comes to no conclusions about the reasons for the overall decline. (Increasing average maternal age isn't enough to explain it.)
Apparently, the NPR reporter suggested that the declining sex ratio since 1974 has something to do with a higher percentage of mothers being single, and said something about tearing at the social fabric and biology punishing us for our sins or something to that effect. Evidently the reporter has some theory about unmarried women having fewer resources and therefore being under more stress, or weighing less, or some combination.
If I'm hearing this straight, it is one of the most egregious examples of reportorial malpractice I have heard of, and that's saying a lot. There is not the slightest suggestion of anything like that in the CDC analysis. The sex ratio at birth in the U.S. of 1.048 is lower than the ratio in Colombia (1.058), Egypt (1.058), El Salvador (1.063), and many other countries where presumably most married women face more material deprivation than most single women in the U.S., not to mention emotional stress from endemic violence, in the case of Colombia. Why a lower sex ratio at birth "tears at the social fabric" is unclear to me in any case. This "journalist," in the guise of reporting on an official data release, has made up a story to promote his or her personal moral crusade. Fortunately, this is a blog, so I can say stuff like that without actually knowing what I'm talking about, because again, I didn't hear the story. If anyone did, and we're being unfair, please weigh in.
UPDATE: A commenter reveals that the above representation of this story is not accurate. Still, the story seems rather dodgy to me. Please read the comments.
BTW -- the Haloscan comment counter, for reasons beyond my control (you'll have to ask Haloscan) is usually wrong. You will often see 0 comments when there are, in fact, quite a few. You may also see 3 comments when there are a dozen. So please check the comments! They're the best thing here.
*(However, men are weak and women are strong, so more male babies die, and the tendency of males to die more often, all things being equal, continues throughout the life span. That's why there are more widows than widowers. Of course, there are some societies in which girl babies are preferentially neglected or even killed, but that's another story.)
Maybe even more likely to bore you, it's dirty work, but somebody has to do it.
In public health, there is a high probability that we are speaking in terms of probabilities. One way we talk about probabilities is in terms of risk. What is your annual or lifetime risk of breast cancer? How is your risk different depending on your reproductive history, your body mass index, your age? Of course, more than likely, we're actually talking about the probability that you will be diagnosed with the disease, not the probability that you have it. That probability depends on how hard you, or your doctor, look for it, as well as whether or not you have it.
But very often, we are dealing with what I might call meta-probabilities. Much of our information does not come from counting up facts about 100% of the population. Instead, it comes from samples that we believe are "representative" of some population. The probabilities we derive from those samples are only estimates, but we try to be very precise about what we mean by estimation. We talk about confidence intervals -- the probability that an actual risk lies between an upper and lower boundary. But is it meaningful to talk about the probability of a probability? Now this starts to get philosophically complicated. Can this sort of finding properly be called "knowledge"?
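As a concrete (and entirely hypothetical) example of a confidence interval: suppose 120 people in a random sample of 2,000 have some condition. A standard 95% interval for the underlying risk looks like this:

```python
# 95% confidence interval for a risk estimated from a sample.
# The counts are made up for illustration.
import math

cases, n = 120, 2_000
risk = cases / n                       # point estimate: 0.06
se = math.sqrt(risk * (1 - risk) / n)  # standard error of a proportion
low, high = risk - 1.96 * se, risk + 1.96 * se

print(f"estimated risk {risk:.3f}, 95% CI ({low:.3f}, {high:.3f})")
# roughly: estimated risk 0.060, 95% CI (0.050, 0.070)
```

Which is a statement about the probability that a probability lies in a certain range, and that's exactly the philosophical oddity at issue.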
When the hairhatted weather android on TV says that "the probability of precipitation is 50%," is what he says meaningful? It might rain and it might not, but I already knew that. What if I said that when the weather forecaster says the probability of rain is 50%, there's a 50% probability that it's actually more than that, but there's a 67% probability that it's less than 56%? Does that mean something? I'm not sure, but we make analogous statements in public health all the time. The bottom line is still the same, it might rain and it might not. But we still watch the weather forecast, and depending on what the weatherdroid says, we do or do not carry an umbrella.
In public health, we hope that people will do the same -- eat what we advise, get certain contaminants out of the air, water and food, get the shots, take the pills, get the tests, exercise, have the surgery, or not, whatever it may be. But we tend to do a really lousy job of explaining ourselves.
So be warned. I'm going to try to explain things better. Be prepared for some serious wonkery. If it's boring, skip it and come back in a couple of days. If you like it, you probably need to get a life, but welcome to the club.
First, a track back to the Dharma Bums, and specifically this post.
Rexroth's Daughter is on the list for those Zogby Internet polls, and this one basically asks some questions about drinking water treatment the answers to which are completely obvious. Although there's a bit of obfuscation, the question boils down to, "Would you prefer to drink a glass of water that contains harmful byproducts of disinfection and/or may be contaminated with cryptosporidium or mercury; or one that doesn't have any of those contaminants?" Then it asks whether your city should sell you potentially contaminated water or not, and whether the EPA should allow drinking water to be contaminated. Well duhhh.
I said that this is not exactly a push poll because they aren't using a script to steer people to specific answers. Instead, they know in advance what the answers will be without having to push.
In fact, this is not a poll at all. It is an exercise intended to support a marketing and advertising claim. Someone is manufacturing a system for treating water, and they want to be able to claim that 99% of people polled support their system. We all get bogus polls like this in the mail. (I mostly get them from the Democratic Senatorial Campaign Committee and similar organizations.) The main purpose in direct mail campaigns is to get people to open the envelope and send in a contribution, but they may also use the "results" (fully known in advance) to make claims about "respondents to a mail survey." In these cases, the wording of the questions is also preposterously tendentious.
Real polling requires that:
- Respondents are selected at random from a known population. Or, more precisely, that every member of that population has either an equal, or a known, probability of being selected.
- The wording of questions (called "items") does not suggest to respondents what the pollster thinks the answer ought to be, or what the "right" answer is.
- The response categories that are offered are exhaustive (every possible answer is available, or at least you get to opt out with "not applicable" or "don't know") and mutually exclusive (if the pollster doesn't offer the opportunity to pick more than one answer, then it must be impossible for more than one answer to be true.)
- The order of questions is such that an earlier question is not likely to influence the answer to a later question.
There are various other considerations, such as so-called "socially desirable response bias" -- i.e., it isn't very useful to ask "Do you think Black people are inferior" because most people, even if they do think that, won't say so.
But the first requirement -- the random, or probability sample -- is essential or all the rest is meaningless. The purpose of a real poll is to estimate the percentage of people in a defined population who hold certain opinions or beliefs, or who report certain experiences or circumstances. Actually "survey" is a broader term. By a "poll" we usually mean a specific kind of survey that focuses on opinions. But surveys are used to study health conditions, health care access, hazardous exposures, and all sorts of other issues in public health.
So, up soon, a discussion of concepts of probability and statistics that are important in public health.
Tuesday, July 19, 2005
I owe the world a post on the scientific investigation of thimerosal and autism. Although I empathize with the world's painful anticipation, I must first prepare the way with a few remarks about epidemiology and public health science.
People complain all the time, "First they tell us that everything causes cancer, then it doesn't after all, then maybe again it does a little. They tell us that the orange and green pills are good for us, then they tell us they're killing us. They keep changing what we're supposed to eat, what tests we're supposed to get, how fat we should or should not be. Fuhgedaboudit, from now on I'm ignoring it all."
Actually it's not quite that bad. Not quite. After all, people used to think that the earth was at the center of the universe, and the sun and the planets and the stars went around us. Now we know better, but just because we used to believe otherwise doesn't mean there is any doubt that the earth goes around the sun. Science advances, and sometimes ideas are overthrown.
One reason epidemiology is different from cosmology, though, is that people act on current beliefs in ways that have a direct and immediate effect on our health and even our survival. Sometimes those beliefs contain a substantial degree of uncertainty, but decision makers feel compelled to go with their current best estimates.
Epidemiology also happens to present some particular difficulties. Only rarely are there ethical concerns about experiments in particle physics, energy fields, or gravitation (beyond the issues of appropriate allocation of scarce resources), but experimenting on human beings is definitely a thicket of ethical brambles. That means we often have to depend on observing relationships among variables, such as toxic exposures and health outcomes, under real world conditions that can't be controlled. Conclusions drawn from that sort of research are usually stated in terms of probabilities, but even those probabilities depend on the assumption that we haven't overlooked some other factor -- one associated with the supposed risk factor we are studying -- that is the real cause of the outcome.
New drugs are generally approved only after experimental trials, but there are several difficulties here as well. One of the most important is that in an experiment, you set out to study certain pre-defined, hypothesized outcomes. It is easy to overlook adverse effects that you don't happen to be looking for, and even if you do notice certain adverse effects that the experiment wasn't designed to study, it can be unclear whether they really are caused by the drug. Perhaps they affect only a subset of all people, such as women or men, or people with other pre-existing conditions, or specific genetic endowments. Then they might affect too few of the study participants to be noticed at first, but become clear when more people take the drug, or even when the original data are re-analyzed. By the same token, associations that are noticed can turn out to be spurious.
Uncertainties can ultimately be resolved by new experiments specifically designed for the purpose. Overlooked associations can emerge clearly once we have much larger numbers to analyze. In the meantime, vested interests (such as, obviously, drug manufacturers) may be in a position to pick and choose from the available data in order to create a misleading impression of certainty or uncertainty, or may simply be able to bamboozle the public which has very limited access to the original information and no ability to interpret it.
Nevertheless, the truth is out there, and sometimes we find it. At least we become sufficiently sure of it that it's not worth investigating any further. Maybe we really are just software running on a superduperultracomputer and the whole universe, including the earth and the sun and the stars, doesn't even exist. But that possibility is not worth worrying about, especially since there doesn't seem to be a whole lot we can do about it anyway. The possibility that thimerosal has caused an epidemic of autism is in the same category.
I'm a glutton for punishment. I read the NYWT and the Boston Globe every day; I listen to Every Teeny Weeny Little Thing Considered; most days, just to get an even better idea of how the average person is being brainwashed, I'll check out a network news show. It's understandable that they missed this story, because Anonymous Senior Administration Officials haven't told them about it on double double super secret background, but the BBC, which uses less scrupulous journalistic methods, managed to ferret it out.
Niger children starving to death
By Hilary Andersson
BBC News, Maradi, southern Niger
Children are dying of starvation in feeding centres in Niger, where 3.6m people face severe food shortages, aid agencies have warned.
The crisis in the south of the country has been caused by a drought and a plague of locusts which destroyed much of last year's harvest.
Aid agency World Vision warns that 10% of the children in the worst affected areas could die.
They say the international community has reacted too late to the crisis.
Niger is a vast desert country and one of the poorest on earth. Millions of people, a third of the population, face food shortages.
There's more. It's hard to read.
Monday, July 18, 2005
Direct to consumer advertising of drugs such as Vioxx and Prempro (see below) that turned out to be positively dangerous has certainly attracted plenty of attention. But the intention of DTC drug advertising is not to sell us poisons -- that was accidental.* The intention is merely to rip us off.
No doubt you've heard as much about the Purple Pill as you have about Wendy's Bacon Double Cheeseburger and the latest method of making your bathroom bowl sparkle. A month's supply of Nexium costs $171. A month's supply of Prilosec, an older medication which can be purchased over-the-counter, costs $24. According to clinical trials, of people who take Nexium, 60-70% achieve complete relief of their symptoms, whereas, of people who take Prilosec, 60-70% achieve complete relief of their symptoms.
The difference? Nexium is still under patent. Prilosec (omeprazole) is a generic drug with multiple manufacturers.
If you want to learn about the drugs available for various conditions, with clear comparisons of price and effectiveness, from people who aren't out to rob you, try Consumer Reports, which has set up a web site for you. Great idea, huh?
Consumer Reports best buy drugs
*On second thought, that applies to HRT only. In the case of Vioxx, a case can be made that they did have a pretty fair idea of what they were doing.
What do I know about it, being as I will never experience it, but I'm going to talk about it anyway.
Everyone will remember that Hormone Replacement Therapy (HRT) was intensively marketed to women on the premise that it prevented the supposed negative effects of menopause -- in particular, the increased risk of heart disease and stroke that women face as they grow older. These benefits were initially observed in a cohort study called the Nurses' Health Study, but a later randomized controlled study called the Women's Health Initiative, and another study called the Heart and Estrogen/progestin Replacement Study, found just the opposite to be true. HRT also increases the risk of breast cancer.
Nevertheless, if you're one of those unfortunate people who watches television, you have no doubt seen the continuing Direct to Consumer Advertising for Prempro, the major HRT product, now marketed exclusively to combat symptoms of menopause. Thanks to drug company lawyers worried about massive lawsuits, and an FDA now slightly more inclined to cover its ass, these ads say enough about the risks that at first I could not imagine they would persuade anyone to actually take the stuff. Then I realized that the imagery, the sunny disposition of the spokeswoman, and the overall subtle manipulation are probably far more powerful than the actual information content for many viewers.
Now it turns out that if you take the stuff, you're likely to experience symptoms of menopause when you stop anyway -- in other words, you're just delaying them. But Diana B. Petitti, M.D., writing in last week's JAMA, in a confusing essay which seems to be trying to support continued prescribing of Prempro while presenting nothing but very good reasons why nobody should come within forty feet of the stuff, notes that
"[S]tudies have shown that women who are randomized to the placebo group of [sic] trials investigating a variety of treatments for menopausal symptoms often improve," and that "Accumulating evidence suggests many symptoms commonly attributed to estrogen deficiency [i.e., menopause] are not. . . [Of] symptoms commonly attributed to menopause, including . . . hot flashes and night sweats, vaginal dryness, sleep disturbances, mood symptoms . . . cognitive disturbances, . . . back pain, tiredness, stiff joints, urinary incontinence, and sexual dysfunction, [an NIH panel] concluded that the evidence established causality only for [hot flashes and night sweats], and vaginal dryness."
For many years, HRT was touted as a miracle cure for aging. It was supposed to keep skin youthful, protect against cognitive decline, keep women sexy and sexual -- and in fact, there are still various quacks promoting it for those purposes. It's pure crap.
As we grow older, we change. Some of the changes we like -- hopefully we get wiser and find more equanimity -- some of them we don't. Most people would like it if their bodies didn't start to wear out and run down, and I won't argue with that. But menopause, as such, seems to me to be just a part of life. It causes some transient, annoying, but seldom disabling symptoms. Then you can't have babies any more, but on the other hand you no longer have the curse. That's it. There's no need for a magic pill. And Wyeth should stop selling the stuff. Now.
Friday, July 15, 2005
The Washington press corps -- by which I mean the representatives of the corporate media, including the Associated Press, the television networks, the New York Times and those few other newspapers and newspaper chains who still have their own reporters in DC -- should agree on the following policy.
"Senior Administration Officials" -- which means, essentially, any political appointee, including cabinet secretaries, administrators of independent agencies and their deputies and assistants, official spokespersons for government agencies, political operatives of all stripes, and of course the president's staff, including the office of the White House counsel, the chief of staff, and the National Security Council staff -- will not be quoted, nor will their remarks be paraphrased, discussed, represented or alluded to, unless the speakers or writers are identified. There is no reason I can think of why it is in the public interest for the phrase, "A senior administration official, who did not wish to be identified, said . . ." -- or any phrase with similar meaning -- ever to appear in any broadcast or print report by any journalistic outlet which expects to be viewed as legitimate or credible.
Whistleblowers, as they call them -- people who reveal information to reporters that powerful people do not wish to be revealed -- are a different category. Bill Keller may not be able to grasp the difference, but obviously, I'm a lot smarter than he is. Or maybe just more honest.
I probably won't be back until Sunday, as is usual this time of year. Stay cool.
He begins by describing a meeting in Georgia of "top government scientists and health officials," which he says was held at a Methodist retreat center "to ensure complete secrecy." Oh yeah, there were representatives of vaccine manufacturers there also. It was all "embargoed." Photocopies were prohibited. They were there to discuss a study by CDC epidemiologist Tom Verstraeten, which indicated a link between the preservative thimerosal in vaccines and autism. He supposedly cited a "staggering number of earlier studies" which had indicated a link with speech delays, Attention Deficit Disorder, hyperactivity, and autism, and now, supposedly, he had nailed it down. Kennedy claims in this passage that since 1991, when the FDA recommended additional vaccines for young infants, the incidence (apparently -- he just says the "rate") of autism had increased by 15 times, from one in 2,500 children to one in 166. According to his account of the meeting, the participants focused exclusively on how to cover up these findings, and make sure the public never found out about them.
I haven't had time to even think about reading the transcript of this meeting, which is hundreds of pages long, but the Terrifying Brigand gives a link to a blogger who has. Anyone who is home sick can read the transcript at safe minds dot org. It turns out, according to people who have read the actual transcript, that the "embargo" was, as is the usual meaning of the term, temporary. Verstraeten's results were to be released later that month. It also turns out that the scientists weren't interested in "covering up" these sensational results -- they just doubted them. Most of them were not convinced that any link between thimerosal and autism had been established and they discussed further investigation, not deception.
Now I'm not surprised that Kennedy was able to pluck, from those 250 pages, three or four remarks from people who seemed to be interested in covering their asses or who were concerned about public reaction to the suggestion that thimerosal causes autism. But suppose that is indeed an accurate reflection of the tenor of the entire meeting. Is that in itself evidence that thimerosal causes autism? Of course not. It is merely evidence that some people at the meeting thought it might, back in 1991, and reacted in a less than admirable way. Kennedy is setting up a universe in which government scientists work on behalf of pharmaceutical companies, and are so dedicated to the welfare of the corporations that their unquestioning instinct is to lie to the public when the corporate interests are threatened.
It is true that the FDA (not the CDC, which has other problems) has been corrupted by its relationship with the drug companies. It is a classic problem in politics that regulatory agencies are often captured by the industries they regulate. It is very important for the public -- both consumers and physicians -- to understand this problem, and it is essential that we fix it. But the fact of drug company influence in the federal scientific establishment only makes the idea that the government could have downplayed the risk of thimerosal plausible; it does not make it so.
Next: The scientific record
Thursday, July 14, 2005
By Kristen Wyatt, Associated Press Writer | July 14, 2005
Two weeks ago, [William] Crutchfield [of Snellville, Georgia] walked down his driveway carrying a .380-caliber pistol and greeted his mail carrier at the curb. He then opened fire on Lazenby, drove to the police station in his Chevrolet Cavalier and told the secretary, "I just shot the letter carrier."
"He took his mail and then said, 'Hello.' And then just started shooting," Lazenby said from his hospital bed Tuesday. "He just casually got in his car and drove away."
Lazenby was shot seven times, once in the arm and six times in the abdomen. A neighbor heard shots, came outside and called 911 as the 52-year-old grandfather lay in the grass of a nearby lawn thinking he might die.
When Lazenby came out of surgery hours later, he learned that he had suffered extensive damage -- 29 holes in his colon and intestines, shattered bones in his arm. He would live, but he would never be able to digest food or produce insulin by himself.
Meanwhile, Crutchfield was telling police his startling motive. It had nothing to do with Lazenby, but instead was a way out of medical debt, he reportedly said.
"He was saying that he wanted to be cared for by the federal government, that he was in poor health and wanted to be taken care of," said Atlanta postal inspector Tracey Jefferson.
Crutchfield, a 60-year-old electrical contractor who lived alone, claimed $90,000 in medical debts for an unspecified ailment and feared losing his home, another postal inspector testified at his preliminary hearing.
"He felt that it was better to be in federal prison than out on the street," postal inspector Jessica Wagner said.
Well, looks like his problem is solved.
Speechless's comment on the previous posting reminds me of research done back in the '50s by my late mentor Irving Kenneth Zola. Irv interviewed patients at a clinic in Boston, back when immigrants to our town meant Irish, Italian, and Eastern European Jewish. He told me that quite a few of the patients told him "Dr. Zola, you've helped me more than any of the other doctors." That was just because he listened to them, of course -- he was a Doctor of Philosophy, who didn't give them any medical advice at all.
Anyway, although it wasn't the original intent of his research, Irv noticed something quite interesting: the physicians were much more likely to apply a diagnostic label indicating a psychogenic origin for symptoms (e.g., psychosomatic, hypochondriacal, anxiety, hysteria) to patients who were Italian or Jewish than to patients who were Irish or Anglo. After interviewing the doctors and making some observations of his own, he thought he knew why. The Italian and Jewish patients were far more emotionally demonstrative, more likely to express pain and to complain openly about their symptoms. The Irish and Anglo patients were more stoical and reticent about their suffering.
The physicians, who were Anglo and upper-class Jewish (as opposed to the working-class Jewish immigrant patients), perceived the Irish and Anglo patients as behaving appropriately, whereas the Italian and Jewish patients seemed hysterical and overwrought. So here we have the first (as far as I know) recorded discovery of cultural incompetency in medicine. Of course, it's been long forgotten.
P.S. Back in those days, social scientists didn't have computers. Irv had to set up his crosstabulations and do his chi squares by hand. A little while later he went back and did a three dimensional crosstabulation and he discovered something else: the difference was mostly accounted for by female patients. It was the Jewish and Italian women who were being called hysterical. So that was the right word choice.
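For readers who've never met a chi-square, here's a minimal sketch of the kind of crosstabulation Irv was doing by hand. The counts below are invented for illustration; they are not Zola's actual data:

```python
# Hypothetical 2x2 crosstab (illustrative numbers only):
# rows = patient background, columns = diagnostic label applied.
table = {
    "Italian/Jewish": {"psychogenic": 26, "organic": 74},
    "Irish/Anglo":    {"psychogenic": 9,  "organic": 91},
}

def chi_square(table):
    """Pearson chi-square statistic for a contingency table of counts."""
    rows = list(table)
    cols = list(next(iter(table.values())))
    row_totals = {r: sum(table[r].values()) for r in rows}
    col_totals = {c: sum(table[r][c] for r in rows) for c in cols}
    n = sum(row_totals.values())
    stat = 0.0
    for r in rows:
        for c in cols:
            # Expected count under independence of row and column.
            expected = row_totals[r] * col_totals[c] / n
            stat += (table[r][c] - expected) ** 2 / expected
    return stat

print(round(chi_square(table), 2))  # -> 10.01
```

With 1 degree of freedom, anything above about 3.84 is significant at the .05 level, so a statistic around 10 would have told Irv that the labeling pattern was very unlikely to be chance.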
Last year, a physician friend of mine told me about an amazing article he had read in a pediatric journal, about the "Ay ay ay" syndrome. It seems that Latina mothers in the pediatric emergency room would shout "Ay ay ay" as their children lay bleeding on the gurney. This was labeled by the authors as a form of pathology.
You can go to Harvard, do a pre med course and graduate summa cum laude; spend $250,000 on a medical degree; do three years of residency and two years of fellowship; and still be nearly as ignorant as a President.
While the blogosphere is obsessed with the mystery of why the Queen of Mesopotamia languishes in Sir Patrick's dungeon, while Count Novakula continues to flit freely around the television studios sucking blood, I thought I'd propose a few mysteries of my own. These are problems of at least as much importance to the health of the population as the cause of Alzheimer's Disease or finding a magic bullet to kill cancer cells, but they don't get half as much media attention, money, or prestige as all those biomedical breakthroughs, half of which end up being bogus anyway (take it from an Assistant Professor).
Why do Black men have such a dramatically higher incidence of prostate cancer than white men? (See below.) Why are African Americans at such higher risk of hypertension? (It's not straightforward genetics; Africans in Africa are not prone to hypertension.) Why is the African American infant mortality rate so disproportionately high?
On the other hand, why do Mexican American women have low infant mortality and comparatively good birth outcomes, even though their poverty rate is higher than that of African American women and they are even more likely to have little formal education? On the other other hand, why is the Puerto Rican infant mortality rate disproportionately high? Why do Latinos in the U.S. have such a disproportionately high risk for Type 2 diabetes?
Why do African Americans and Latinos get worse medical care than non-Hispanic whites, even when they have insurance and go to the same hospitals and HMOs?
Why don't half the people take their pills the way they're supposed to?
Why do people wait until the visit is basically over and they're halfway out the door before they mention to the doctor what is really bothering them? (If they ever do.) Why do doctors ask 90% of the questions? Why don't people say anything when they don't understand what the hell the doctor is talking about?
Why do people -- mostly male -- kill themselves? Why do people -- mostly female -- cut themselves? (Neither behavior makes obvious sense in terms of evolution, or for that matter, intelligent design.)
What's up with the placebo effect?
Anybody else got questions to add to the list?
Wednesday, July 13, 2005
Prostate cancer is the most common cancer in men in the United States. According to data from a cancer surveillance program called SEER, age-adjusted incidence rates rose by 108% from 1986 to 1992, and then fell sharply. (The incidence rises sharply with age, so the total incidence is not the same as your own risk, which depends on how old you are.) The incidence of diagnosed prostate cancer varies quite substantially by race, for unknown reasons. According to the National Cancer Institute, "For white men, the [age adjusted] incidence rate peaked in 1992 at 185.8 new cases per 100,000 men before dropping 27 percent to 135.3 new cases per 100,000 in 1994. Incidence in African American men peaked in 1993 at 264.7 cases per 100,000 before declining 11 percent to 234.4 cases per 100,000 in 1994."
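If you want to check the NCI's percentages against the raw rates it quotes, the arithmetic is a one-liner:

```python
# Percentage drop from a peak age-adjusted rate to a later rate
# (rates are new cases per 100,000 men, from the NCI figures above).
def pct_drop(peak, later):
    return (peak - later) / peak * 100

print(round(pct_drop(185.8, 135.3)))  # white men, 1992 -> 1994: 27
print(round(pct_drop(264.7, 234.4)))  # African American men, 1993 -> 1994: 11
```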
Pretty strange, huh? Was there something in the water in the late 1980s, which wasn't there after 1992? That could explain the election of George Bush the first, and his failure to secure a second term, but it does not explain the changing incidence of prostate cancer. For that, we have a diagnostic test to thank, called the Prostate Specific Antigen (PSA). In the late 1980s, doctors sharply increased their use of the test, so they started finding more cancers; then, after they had found a good percentage of them, there were fewer out there still to find and the incidence fell. The fact is that a large percentage -- possibly a majority -- of men over 70 who die of other causes, who are autopsied, turn out to have prostate cancer. Most prostate cancers are "indolent" -- they grow very slowly, they are non-invasive, and they do not metastasize.
In other words, you officially have a disease, but you do not have an illness. You have no symptoms, you do not perceive that anything is wrong with you -- until you have the PSA test and the doctor tells you, "You may have cancer." "May," because having an elevated PSA level doesn't mean that you necessarily have cancer, it just means that you might. But now, all of a sudden, you are sick. You need a biopsy, which costs money, hurts, and scares the shit out of you. Maybe the biopsy is positive. Now you're really scared, because you officially do have cancer!
Of course, so do most guys your age, the difference is that you know about it. Unfortunately, the doctor cannot tell you whether your cancer is going to become metastatic and cause a very unpleasant death; or just sit there quietly for the rest of your life until you die at age 97 in a windsurfing accident. But you might die of cancer! So now you need surgery which can leave you incontinent of urine and unable to, uhh, you know.
By the way, there is absolutely no evidence that widespread prostate cancer screening has led to a reduction in prostate cancer mortality. As a matter of fact, there is no relationship between the rate of PSA screening in a given state or country and trends in prostate cancer mortality. Although the mortality rate did decline in the early 1990s, for reasons which are not terribly complicated but which slightly exceed the word count limit of the blogging format, it is implausible that this was related to PSA screening.
Perhaps it is not a surprise that oncologists think that men over 50 should be offered the PSA test -- it brings them lots of business, after all -- but when your doctor offers you the test (or your spouse's doctor, if you discuss such things with him), here's something you should know. The test misses a lot of high grade tumors which are, in fact, dangerous; and it falsely signals the presence of tumors when there aren't any. According to a recent article by Thompson, et al in JAMA 2005;294:66-70 (the initials used to stand for Journal of the American Medical Association but now they don't stand for anything -- although the American Medical Association does stand resolutely for physicians' right to make a whole lot of money), of men who started out with normal PSA levels and were screened annually by PSA and that really fun procedure where the doctor sticks his finger up your ass, 65% ended up getting at least one biopsy. According to the protocol, a biopsy was triggered by a PSA level over 4 nanograms per milliliter -- a number I'm sure you really care about -- or by an abnormality found on digital rectal examination. Of these men, about one fifth ultimately received a diagnosis of prostate cancer, of which about one quarter had characteristics indicating that they were very dangerous.
This study included a biopsy for all the participants at the end of five years, which enabled the investigators to find out what was really going on. With the commonly used PSA cutoff of 4.1 ng/ml, 21% of cancer cases would have been detected, and 6.2% of men without cancer would have had false positives, resulting in unnecessary biopsies. Note that as this age group included men as young as 55, the majority of them did not have prostate cancer, therefore most of the biopsies that were performed would have found no cancer. Lowering the cutoff to 1.1 ng/ml would have found 83.4% of cancer cases but 61% of men without cancer would have had false positive results, which essentially means the test would be useless.
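To see why the lower cutoff would make the test useless, consider the positive predictive value implied by those numbers. Here's a sketch, assuming a cancer prevalence of 20% among the screened men (roughly the "one fifth" figure above; the exact prevalence is my assumption, not a number from the study):

```python
# Positive predictive value: of the men who test positive, what
# fraction actually have cancer? Sensitivities and false positive
# rates are the ones quoted above; the 20% prevalence is assumed.
def ppv(sensitivity, false_pos_rate, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = false_pos_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for cutoff, sens, fpr in [(4.1, 0.21, 0.062), (1.1, 0.834, 0.61)]:
    print(cutoff, round(ppv(sens, fpr, 0.20), 2))
# -> 4.1 0.46
# -> 1.1 0.25
```

On these assumptions, a positive result at the 1.1 cutoff would be wrong about three times out of four -- and even at 4.1, it is wrong more often than it is right.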
Men in the United States have a 17.3% risk of being diagnosed with prostate cancer, but only a 3% risk of dying from the disease. Yet, of men who are diagnosed and undergo removal of their prostate gland, 35% have recurrence of cancer. As the authors note, "An inherent property of all screening tests is that they disproportionately enhance the detection of slower-growing cancers, because more aggressive tumors have a greater likelihood of becoming clinically apparent between screenings."
So guys, you now have the info. Make your own decisions. Don't let anybody, even one with a white coat and a fancy degree, tell you what to do.
Tuesday, July 12, 2005
Some of you have no doubt noticed the full-page ads in major metropolitan newspapers, the letters to the editor, and reports of parent activists and others alleging that use of a preservative containing the compound ethyl mercury in childhood vaccines caused a massive epidemic of autism in the United States. Promotion of this theory has been spearheaded by Robert Kennedy Jr., who published a powerful polemic about it in Rolling Stone magazine. I did a google search on this article's title and after clicking through 15 pages of search results, I found innumerable writers, bloggers and organizations who touted Kennedy's charges, and not a single attempt at refutation or even critical consideration.
If, instead of doing a google search on Kennedy's popular article, one does a PubMed search on Thimerosal and adverse effects, or Thimerosal and autism, one enters an alternate universe, one where this theory has been diligently studied, at great expense over many years, and has been found to be utterly without merit. However, I cannot give you links to most of the relevant literature because it is in medical journals which are available to subscribers only -- and subscriptions cost hundreds of dollars a year. As a medical school faculty member, I have access to on-line subscriptions, but most of my readers do not, and if I give you my password, I'll potentially be in trouble. The scientists who did the relevant studies and reviews have not, so far as I know, attempted to publish anything in Rolling Stone, have not gone on talk shows, and have not written any letters to the editor of the New York Times. Anyway, most people would not be able to understand the medical journal articles and certainly would find them boring and nearly impossible to read.
As an exercise in public service, for what it is worth, I will in the next day or two put up a comparison of Kennedy's key points with the mainstream scientific beliefs about this, and then perhaps readers can think about these issues for themselves. For now, what I wish to emphasize is simply that we have a major problem of communication between scientists and the rest of the world. This is why the Bush administration can get away with flat earth theories about such matters as global warming, mercury pollution, etc. (in which it is doubtful that they truly believe, but which are convenient to their corporate sponsors), while half of Americans do not believe in evolution. At the same time, because science is not democratic and is not open to citizen participation or input, it fails to serve the public interest.
I truly believe this is one of our most critical social problems. What can we do about it?
39,000 killed in continuing violence
By Irwin Arieff at the United Nations
NEARLY 40,000 Iraqis had been killed as a direct result of combat or armed violence since the US-led invasion, a figure considerably higher than previous estimates, a Swiss institute reported today.
The public database Iraqi Body Count, by comparison, estimates that between 22,787 and 25,814 Iraqi civilians have died since the March 2003 invasion, based on reports from at least two media sources.
No official estimates of Iraqi casualties from the war have been issued, although military deaths from the US-led coalition forces are closely tracked and now total 1937.
The new estimate of 39,000 was compiled by the Geneva-based Graduate Institute of International Studies and published in its latest annual small arms survey, released at a UN news conference.
It builds on a study published in The Lancet, a British medical journal, last October, which concluded there had been 100,000 "excess deaths" in Iraq from all causes since March 2003.
The Swiss institute said it arrived at its estimate of Iraqi deaths resulting solely from either combat or armed violence by re-examining the raw data gathered for the Lancet study and classifying the cause of death when it could.
As you know, we're fighting the terrorists over there so we don't have to fight them here at home. I don't recall hearing of any Iraqis volunteering to be massacred on behalf of that premise, but why should they complain? They've been liberated.
Monday, July 11, 2005
The public demands a posting on vaccination. So here goes. The word derives, of course, from the first reasonably scientific demonstration of an effective immunization method. The English physician Edward Jenner heard from a female dairy worker (what was then called a milkmaid) of a popular belief that infection with cowpox -- a disease of cattle that produces mild symptoms in humans -- confers immunity from smallpox. He decided to try it out by first infecting some orphan children with cowpox, and then infecting them with smallpox, to which they proved to be immune. This experiment would be unlikely to receive IRB approval today, but that's water over the dam -- it works.
Although governments throughout Europe and the United States became enthusiastic proponents of vaccination, there was also popular resistance from the very beginning, despite the terrible scourge that smallpox represented. Some people ridiculed the idea; others decried it as unnatural. Anti-vaccination movements did not become highly visible, however, until smallpox had become rare in the U.S. and Europe. Then movements arose that opposed compulsory vaccination as a plot against the working class, or a conspiracy of orthodox physicians to monopolize the profession. In 1905, the U.S. Supreme Court ruled in favor of government power to compel vaccination. Nearly all states, however, still allow a religious exemption.
Development of additional true vaccines had to wait until the 20th century, when we began to understand virology and immunology, for it was only a happy accident of nature that a benign disease existed which was closely enough related to smallpox to produce cross-immunity. In the case of other viral diseases, it was necessary to develop synthetic vaccines based on weakened or "killed" viruses, or viral fragments, which could stimulate an immune response without producing disease. The most storied advance, of course, was the development of polio vaccine, but in the latter half of the 20th Century, effective vaccines were developed for most of the common childhood viral diseases. I can still remember when Rubella, the so-called German Measles, left thousands of children with severe birth defects, including blindness, deafness, and profound mental retardation. That tragedy is now nearly forgotten history.
Yet today, measles and other common, vaccine preventable diseases kill literally thousands of children around the world every day, a fate which is almost unheard of in the wealthy countries. The benefits to individuals and society of universal vaccination seem obvious, impossible to deny. Yet antivaccination campaigns continue. Some people adhere to so-called "alternative" medical theories, and essentially disparage all forms of allopathic medicine. ("Allopathic" is an essentially value-neutral term for the practice that is taught in medical schools and accepted as scientifically valid medicine today; the term's historic origins are interesting but there is no space to go into that here.) Others, however, accept science and argue in scientific terms, but they reach conclusions which are radically dissident.
Like any medical procedure, there is some risk associated with almost every form of vaccination. Vaccinations can, obviously, give a child a sore arm and sometimes a mild disease with fever. More serious complications, particularly neurological side effects, can occur rarely. There have been instances of errors in manufacturing -- as in a case where some children got poliomyelitis from a batch of bad vaccine -- and cases of vaccines that were non-sterile. Indeed, Jenner's cowpox vaccine, maintained by a chain of transmission from human to human, was often contaminated and spread other diseases even as it prevented smallpox. Some vaccines used in the past would not be approved today, and there have been a couple of instances of post-marketing withdrawals, most famously for a vaccine against rotavirus which caused a bizarre, unexpected side effect. (It is very odd indeed that a vaccine could do this, but the side effect was intussusception -- a "telescoping" of a portion of the small intestine, with one segment folding into another.)
Nevertheless, one must compare these risks to the danger of actually getting the diseases vaccines are designed to prevent, and there is simply no contest. On balance, the vaccines in use today are overwhelmingly beneficial to the recipients, and just as much so to the "free riders" who refuse them, but whose children are not exposed to potentially serious illnesses because their schoolmates are immunized. While we usually think of measles, mumps and rubella as just serious nuisances, in fact all three diseases can have serious complications which make the risk of adverse effects from vaccination seem inconsequential by comparison.
There are major problems, however, with the vaccine infrastructure. Vaccines are not very profitable, it turns out, and very few companies are currently in the business. We saw the consequences of this last year when a manufacturer of flu vaccine in Britain had its license suspended and there was a temporary shortage. Fortunately it was not a very bad year for flu and in the end, we found ourselves with surplus vaccine and no epidemiological disaster. The world, however, simply has no means of producing enough vaccine should a highly virulent strain of human influenza appear, as many fear is very likely to happen soon.
So now comes this controversy over the Measles-Mumps-Rubella (MMR) vaccine and autism. In my last posting on this subject I suggested I had not researched this question enough to draw conclusions. I have now remedied that failing and I will write on the subject anon.
Sunday, July 10, 2005
I'm sorry for being scarce lately. The overload alarm on my bullshitometer has been going off incessantly, and it's keeping me up at night. I could just turn the damn thing off, but I'm afraid of being overwhelmed by a massive, ineluctable tide of bullshit and drowning in my sleep.
Anyway, the past few days have given us all a profound lesson in how the generally accurate reporting of True Facts can constitute a massive deception. Was the attack on London mass transit on Thursday the most important thing that happened in the past four days? Applying the usual quantitative tools of epidemiology, obviously not. We now believe the death toll was approximately 75. In 2004, 514,250 people died in England and Wales. 65% of these were people older than 75, but that still leaves about 180,000 of what might be called premature deaths. More than 11,000 people in the UK died from "accidents" (a politically incorrect term in public health), most of them relatively young, and more than 3,000 of these died in "land transport accidents," the majority of them young men. At least 2,500 took their own lives (many suicides cannot be distinguished from accidents). While 75 murders would be a trivial number in the United States, that is not true in the UK, where there were only 131 confirmed homicides in 2004, although there were more than 900 "events of undetermined intent," some of which are unsolved murders.
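For anyone who wants to check the arithmetic behind those comparisons, here it is as a quick sketch (using only the figures quoted above):

```python
# England and Wales, 2004, from the figures cited in the text.
total_deaths = 514_250
under_75_share = 1 - 0.65  # 65% of deaths were of people older than 75

# Deaths of people under 75, rounded to the nearest thousand.
premature = int(round(total_deaths * under_75_share, -3))
print(premature)  # -> 180000

# The London toll as a percentage of one year's land transport deaths.
print(75 / 3000 * 100)  # -> 2.5
```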
Another way of putting all that is that this event will be barely noticeable in the overall risk of travel in the UK. This simple numeric comparison, obviously, doesn't tell the whole story. I'll get to the moral and political issues momentarily, don't worry, but let's try to remain dispassionate a bit longer.
A big problem is uncertainty. This was an intentional act, and nobody knows whether further, perhaps even more deadly attacks will ensue. This troubles me also, particularly in that the methods used by Islamist jihadists recently have been notably inefficient. There are quite a few simple means of killing larger numbers of people, and no, you don't need any high tech Weapons of Mass Destruction™. Given the complexity of modern infrastructure and the potential of what are today commonly available technologies, it only takes a few people to produce impressive catastrophes. There are 6 1/2 billion people in the world, so it's a bit of a surprise, actually, that we haven't seen even more such events. This is definitely a problem, but hardly anyone is talking about it sensibly. It's not a "war," and it's not a "clash of civilizations." It's a property of industrial civilization that puts unearned power in the hands of any class of radically disaffected people or sociopaths, including Christian terrorists such as Tim McVeigh. But that doesn't mean we're at war with Christianity.
In fact, we've seen quite a lot of comparable events recently, but this particular one received highly disproportionate attention. They happen in Iraq almost every day. To be fair, the first few massive car bombings in Iraq after Mission Accomplished got quite a bit of press attention, but interest has faded quickly as they have become routine. By orders of magnitude, more people -- including completely innocent people -- have been killed by American bombs, tank shells and rifle bullets in Iraq than were killed on July 7 in London. Supposedly the U.S. does not intend these deaths, they are "collateral damage," and therefore not morally reprehensible. Certainly they are of almost no interest whatever to the U.S. corporate media.
The London attack, much to the glee of Fox News announcers, blew the G-8 summit off the front pages, and along with it the two issues at stake there: the fate of human society in Africa, and the fate of the planetary environment. An enormous catastrophe happened at the G-8 summit, a terrorist attack on all of humanity. The leader of 5% of the world's population, that consumes 25% of the world's petroleum production, refused to do anything to reduce his country's use of fossil fuels. (We'll talk about Africa another time.) Oh yeah, this is the same guy who ordered those bombs and shells and bullets for Iraq.
We have a good deal to be concerned about these days. Religious fanatics with bombs are somewhere on the list. What we need to do about that is the same thing we do about all categories of criminals: solve the crimes, capture the perpetrators, and prosecute them. It seems pretty clear to me that if we are diligent about that, this problem will remain somewhere around number 17, well after influenza and drunk driving. Our own leaders, however, say that relying on that approach is treasonous, cowardly, and worst of all, liberal. When somebody commits a mass murder in your country, the manly thing to do is find some country to drop bombs on, launch cruise missiles at, invade with tanks, and then rewrite its laws to permit unlimited foreign investment. If Tony Blair can't come up with a suitable place to bomb, he'll end up looking almost as French as John Kerry.