Double-Blind Placebo-Controlled Studies

September 18th, 2009 by Potato

So I was out at a conference in Victoria, and while I’ve been to a lot of conferences before, it was the first physician-oriented scientific conference I’ve attended. I must say that the quality of the presentations is vastly different from that at a typical conference for scientists. The clinicians were much more confident, articulate speakers, like smooth salesmen, in stark contrast to the introverted scientist reading his slides. Unfortunately, they also tended to present fairly shaky data as fact and as guidance for future treatments.

For example, there were some presentations on the use of botox and acupuncture to treat chronic pain. The presentations were basically “this worked for these patients, everyone should try it.” Now, here’s the thing about research in medicine: you need double-blind, placebo-controlled studies before you can say anything with a great deal of confidence, before you have real proof of a treatment working. When this was pointed out to one of the presenters, he countered: “Well, the proof is that these people keep coming back and paying for more treatments; these aren’t covered by provincial medicare. If it wasn’t working, they wouldn’t keep coming back.” A bit later, in response to another question, another of these practitioners said that about 30% of the people he tried his alternative treatments on returned for more.

The thing is, there’s what’s known as the placebo effect: even if you give someone something that shouldn’t do anything to or for them, some portion of people will find some measure of effect from that treatment. The size of the placebo effect varies greatly depending on how the placebo is presented and what it’s acting on. The placebo effect is hard to understand, but we believe it’s largely “mind over matter,” and as such it seems to work best on ailments that are largely in your head to begin with. If you’re sad, and a respectable-looking fellow in a white lab coat hands you a pill and promises that it will make you feel less sad, you’re likely to feel less sad even if that pill is just gelatin-encased starch. Likewise with pain: from a number of studies, it seems that about 30% of people find their pain gets about 30% better when damned near anything is tried. Pain is a complex phenomenon, partly sensation and partly emotion, so it’s easy prey for the placebo effect. Conversely, something much more objective, like a broken bone or open wound, is less susceptible to the placebo effect.

So I found it rather disingenuous that when a self-selected sample of people (those who come to a doctor’s office ready to pay for acupuncture must already believe it may work) reports some measure of pain relief, a doctor can extrapolate from that to suggest that acupuncture is a generally effective therapy for pain.

The double-blind part means that the subjects in a study must not know whether they have the real or placebo treatment: if they knew, it would defeat the purpose of the placebo control. That’s blinding. Double-blinding is when the experimenter also does not know, since unconscious cues might be passed to the subjects. All important stuff in research, but let me get back to the placebo effect.

What’s interesting is that placebos are almost as effective as some FDA-approved treatments, and often with less severe side effects (though perhaps somewhat unsurprisingly, placebos also have side-effects; mind over matter cuts both ways). However, it’s generally considered unethical for a doctor to prescribe a placebo because it involves deceiving the patient.

Along with the placebo effect is the tendency for patients to lie and pretend they’re all better when a treatment is noxious. Take, for example, trepanation. Whether or not your chronic pain was cured by the medicine man drilling a hole in your head, you sure as hell were going to shut up about it or else he’d go and drill another one. I haven’t seen it reported, but I also have to wonder if there might be an under-reporting of effectiveness for some addictive treatments: could patients over-report their pain if they’re hooked on morphine, saying it isn’t working when it is in order to get an extra dose?

There was a good article about the placebo effect in Wired recently, even touching on the subtle aspects of pill design that can enhance the placebo effect.


Prius Magnetic Fields

June 19th, 2009 by Potato

Previously, I wrote about the fear surrounding hybrid cars, specifically the magnetic field exposure:

For the hybrid car issue, we have the question “what are the fields?” and we don’t even have a good answer to that, from which point some people fall into hysterics (up to selling their car). The real issue is then several steps removed: the Prius may have higher magnetic field exposures than other cars, and those fields have an unknown but probably small effect on human health, and that might outweigh the positive aspects of the technology.

I was understandably baffled that some people would make a mountain out of a hypothetical mole hill, especially in light of the fact that there are many other EMF “risks” that are obviously higher in everyday life, such as using a hairdryer, cell phone, or riding on a subway, which may not offer the benefits of a hybrid drivetrain. I was upset that the few people that have actually taken the measurements have not published or shared them in any way. I figured that when I eventually get a Prius for myself, I would have to borrow the magnetometers from the lab and do the job myself (and possibly get a published paper out of the deal!). (Un?)fortunately, someone has beaten me to the punch: G. Schmid and colleagues from the Austrian Research Centres in Seibersdorf have measured the fields in a Gen2 Prius under various conditions and reported the results at an international conference.

The exposure frequencies can go up to 1000 Hz due to some of the power switching. They found that the exposure was highest near the floor in the backseat, averaging 10% of the permissible general-population chronic exposure according to the ICNIRP guidelines (which are frequency-dependent), and could reach 30% in the worst case (a switch from maximum acceleration to maximum braking). Even at lap level the exposure is <5% of the guidelines (since children have short legs, this is perhaps the more appropriate measure).

They accounted for the effect of the tires (rotating tires with steel belts/cables in their construction produce magnetic fields of up to 4% of the guideline exposure), which would be present in all cars. They also compared to some conventional cars — and the Audi A4 and VW Passat both had significantly higher exposures than the Prius! In fact, the Audi A4 exceeded the ICNIRP guideline in some conditions. The main source of exposure in those cars was the air conditioning system, which is “not as sophisticated” in its electrical management as in the Prius. One factor in particular that they mentioned was that the conventional cars tended to use the chassis as a current return, whereas the Prius has dedicated, shielded wiring loops that return to the battery.

In other hybrids it was found that magnetic field exposure does not correlate with installed electric motor power — the Honda Civic Hybrid has nearly 3X as much magnetic field exposure as the Prius does.

For comparison, another presenter looked at exposures on British Rail cars (not the underground — the motors are in locomotives separate from the passenger cars) and found that the fields were also in the 5-10% of ICNIRP guidelines range.


Peto’s Paradox

June 3rd, 2009 by Potato

Here’s an interesting question: if there’s some chance that any given cell in your body will turn cancerous per unit time, then if you have more cells, and you live longer, it follows that you have a higher chance of getting cancer. If you extend beyond a human to something big and long-lived, like an elephant or a whale, you wonder: why don’t all whales have cancer?

This is called Peto’s Paradox, and is an interesting one I just heard about.
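The naive scaling argument can be made concrete with a toy calculation. This is only a sketch: the per-cell transformation probability and the cell counts below are made-up illustrative numbers, not measured values.

```python
import math

# Naive per-cell model: every cell has the same small, constant chance
# of turning cancerous per year, so lifetime risk compounds over both
# cell count and lifespan. All numbers here are illustrative guesses.

def lifetime_cancer_risk(n_cells, years, p_per_cell_year):
    """P(at least one cell turns cancerous) = 1 - (1 - p)^(cells * years),
    computed via log1p/expm1 to stay numerically stable for tiny p."""
    return -math.expm1(n_cells * years * math.log1p(-p_per_cell_year))

p = 1e-16  # hypothetical per-cell, per-year transformation probability

human = lifetime_cancer_risk(3e13, 80, p)   # roughly 3e13 cells, 80 years
whale = lifetime_cancer_risk(1e17, 100, p)  # ~1000x the cells, longer life

print(f"human lifetime risk: {human:.2f}")  # ≈ 0.21
print(f"whale lifetime risk: {whale:.2f}")  # ≈ 1.00
```

Under this model the whale’s risk saturates at essentially 100% — which is exactly what we don’t observe, and that mismatch is the paradox.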

In fact, cancer is not homogeneous across species — humans get it at about 10 times the rate of any wild species. This is partly due to civilization: we don’t die as young from other natural causes, so cancer gets more of a shot at killing us, and of course there’s our penchant for frolicking in toxins (pet dogs and St. Lawrence belugas also get cancer at higher rates for similar reasons). But even then cancer is not uniform within a body: various tissues have different propensities to cancer based in part on genetics, hormones, and environmental exposure (for instance, aside from skin cancers, there aren’t a lot of UV-caused cancers). So in one sense, part of the reason for the paradox is that one of the base assumptions — that any given cell has the same chance of turning cancerous — isn’t quite true.

But the special case of humans (and our domesticated animals) aside, why do a wild mouse and a wild whale still have fairly similar rates of cancer? Have whales evolved a resistance to cancer that we should investigate, or could it be related somehow to a fast versus slow metabolism (more than one line of research suggests a low-calorie diet promotes longevity)? A recent paper proposes another possibility: that hypertumours (tumours that form inside other tumours) may come into play when you start dealing with larger tumours. After all, a golf-ball-sized tumour can kill a person, but would probably go unnoticed in a whale, where it might take something the size of a Volkswagen to sink it. In the time it takes that tumour to grow, a secondary tumour might spring up and feed on the first, keeping it in check. It’s an interesting proposal.

Complicating this is the interaction between cancer and infectious disease: certain viruses (such as HPV) can increase the likelihood of getting certain cancers. If viruses underlie more cancers than we think, it might explain why the cancer rate is so similar across species of such different sizes.


Answering the Question No One Dared To Ask

February 27th, 2009 by Potato

Yes, blank pages do get their own DOI classification.

Lies, Damned Lies, and the Peer Review Process

February 6th, 2009 by Potato

The peer review process is one of the most important parts of sharing scientific findings: before a paper is published, 2-3 (or more) scientists are contacted by the journal to anonymously tear the article to shreds. This process is what makes references to peer-reviewed publications more respected than references to newspaper articles, websites, self-published books, and Wikipedia. As a scientist I’ve now had the opportunity to be on both sides of the peer review process, and while it can be a bit of a pain as an author when a reviewer is very nit-picky or, worse yet, doesn’t understand something and wants drastic changes from the wrong point of view, it can help fix common errors that would be very misleading down the road. Some truly terrible papers have come across my desk — so bad that I often wonder why an editor didn’t stop them before they even reached peer review — but anonymity, and letting slide the mistakes that do get corrected, is a vital part of the process. So instead I’ll just talk about a few generalities:

Statistics and their misuse are the number one weakness of otherwise good papers. Usually it’s minor things, like reporting p = x.xx instead of p < α. The distinction is that you typically threshold your statistics: you choose to accept that two populations are different when the chance of seeing means that far apart from a single population is less than 5% (or some other threshold, but 5% is the most common). You don’t typically report that the chance of two samples from the same population having mean differences that large is 4.5% or 5.1% or what have you (no matter how close to your 5% threshold 5.1% is, there are other ways to report that). Often parametric (“regular”) statistical tests are run on measures that should be tested with non-parametric (“fancy but less powerful”) methods. This is a pretty fine distinction as well, and we try not to get our heads up our asses about it (especially since many readers are only familiar with the “normal” statistical tests, so reporting those makes it easier for them to grasp… we just like to see the non-parametric tests done as well when appropriate). But there have been a few doozies, where the authors apparently learned how to do stats from the Excel help files. Things like running hundreds of t-tests: roughly speaking, if you’re looking for differences that have less than a 5% chance of occurring by random chance alone, and you then test 100 differences, you should find about 5 “significant” results by chance alone. One paper in particular tried this trick, then had the balls to cite a (non-peer-reviewed) book justifying the move. I showed it to the statistician here, and when he got to that part he threw his keys across the room. “What the fuck!” was all he could manage until he calmed down. I suppose you’d have to be a statistician to feel that strongly about it. Anyway.
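The multiple-comparisons trap is easy to demonstrate by simulation. The sketch below (using a large-sample z-test approximation rather than a proper t-test, to keep it dependency-free) runs 100 comparisons where both samples come from the same population, so every “significant” result is a false positive:

```python
import math
import random

def two_sample_p(a, b):
    """Approximate two-sided p-value for a two-sample z-test.
    A reasonable stand-in for a t-test at these sample sizes."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > z), two-sided

random.seed(42)
false_positives = 0
for _ in range(100):
    # Both samples are drawn from the SAME normal population, so any
    # "significant" difference is pure noise.
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p(a, b) < 0.05:
        false_positives += 1

print(false_positives, "of 100 comparisons were 'significant' by chance")
```

On average you expect about 5 false positives per run, which is exactly the “about 5 significant results by chance alone” problem above; corrections like Bonferroni exist precisely to account for this.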
Sometimes papers are well written, with good references and stats, but are doomed from the start because the experiment just wasn’t done very well. This happens a lot with “let’s just see what this does” type studies, where no control group is included. It can also happen when systematic errors or artifacts creep in: cases where the intervention (testing a drug or whatever) affects the thing you’re trying to measure directly, rather than through the mechanism it’s hypothesized to work by. Unfortunately, if someone is set on deceit, it’s very difficult to root out scientific fraud when all you’re given is a manuscript, so the peer review process is not good at catching fraud (and it wasn’t really designed to, despite the burden some would like to put on it). Sometimes I’d like to see the peer review process encompass things like visiting a lab in person, and better yet, to have independent replications arranged for nearly every paper published. Unfortunately, the reality is that it’s difficult enough for an impassioned scientist to get funding to do their project once, so it’s pure fantasy to think that funding would appear to run studies twice over so that replications could be done as a matter of course.