
It is an odd mixture of disdain, revulsion, anger and fear that I have been imbibing with my morning coffee, and, to tell you the truth, it's already ruining my day. Not that it's been a very nice day, anyway - old Mole entertained a visiting scientist last night and might have overdone it a bit. But this news today has put me off my toast.

It seems that a well known researcher in my field has been exposed as a fake. An outright fraud, trickster, hustler, charlatan, scallywag, gutless cheat. Some might say “rat”, but I know many honorable rats, and it isn't fair to bring them into this. He didn't simply fudge his data; he made it up. Not just once, but many times. In papers I not only read and believed, but trusted (and cited). I feel sick. And very ticked off.

Of course, fakery in science is not new. Perhaps the most famous case in history is that of the fossil human skull ‘discovered’ in 1912 by Charles Dawson and presented as the “Piltdown Man”, a supposed missing link between apes and early man. In 1915 he reported a second find, “Piltdown II”, which converted many skeptics to the authenticity of these fossils. Over the ensuing years, as more human fossils were found, Piltdown Man ceased to fit into the emerging data, and in 1953 new techniques revealed it to be a fraud, most likely perpetrated by Dawson, who had a variety of other shady dealings to his name as well. (Not everyone agrees on whodunit. One line of speculation suggests that the culprit was Arthur Conan Doyle.) A reexamination of the ‘fossil’ revealed it to be a 600-year-old human skull and a crudely filed orangutan jaw bone.

A more recent (but still ancient, pre-PubMed) case involved William Summerlin, who in 1974 presented mice that he had supposedly rendered immunologically tolerant to foreign skin grafts - grafts that turned out to be nothing more than black paint on white mice. In another case, a cancer-causing gene was purportedly shown to encode a kinase; the tubes with the radioactive label that supposedly proved this contained label but no protein. Other cases abound, leading up to the recent Hwang stem-cell fiasco (it makes me weep). In each case scientists uncovered the fakery, but how many were missed? Each time, measures were instituted to ensure that this would not happen again, and again it happened.

In the case that has ruined my day today, an editor from one of the weeklies (those journals with the nice soft pages) called to ask me why the community hadn't caught this scoundrel sooner. Why, for example, hadn't we attempted to reproduce the results? In fact, we had, and while we observed trends in his direction, we never obtained his striking effects; but then we hadn't done the same experiment he had. Why not? Well (I harrumphed), it would have taken a couple of years, and a lot of hard work, and, whichever way it turned out, would the weekly have published it? Probably not, she agreed, and we both got more depressed.

What to do? In these days of heightened security, color alerts and strip searches at airports, the answer tends to be one of reaction: we simply need more security. (Just yesterday I had a small packet of mustard confiscated as I went to board a plane, accompanied by a scowling official who reprimanded me for my carelessness - needless to say, we all felt much safer knowing that nobody could harm us at 50,000 feet with condiments.) But is this right? Shouldn't we simply implement more security in our assessment of research manuscripts before they make their way into print?

Once upon a time, when I was a mere mole-let in my first faculty job, my university proposed just such a protocol. The administration suggested forming a committee empowered with the task of investigating all raw data going into any publication that we planned to submit. Yes, I thought, a good plan, since those of us with nothing to hide would be happy to show fellow scientists how we drew our conclusions. To my surprise, my good friend, Professor Badger, who was venerated by the institution, announced that he would promptly quit if this were implemented, and the senior faculty, as one, joined him in revolt (these were more demonstrative times).

Professor Badger pointed out that such a scheme would necessarily take considerable time and create an adversarial process that would hamstring volumes of good science, ultimately becoming a self-fulfilling system that would justify itself by finding trivial errors and ratcheting them up to spectacular (and damaging) levels. Those who would seek administrative power within such a process would be those with a career interest in the activity. Science, suggested dear Professor Badger, cannot flourish in an environment of implied distrust. We should be critical, yes, but not to the point of assuming that everyone is guilty until proven innocent.

As it happened, he was right. The best-documented example of an investigation into scientific misconduct going wildly wrong is the widely publicized case of alleged “fraud” involving David Baltimore and his colleague Thereza Imanishi-Kari, over a paper they published in 1986. During the process, which involved Congressional hearings, Secret Service investigations and leaks to the scientific press, Baltimore resigned his position as President of Rockefeller University, and Imanishi-Kari faced a proposed ten-year ban on federal research funding. Subsequently it surfaced that the government investigators themselves had committed a far more egregious (and unpunished) series of misrepresentations of their own data - suppressing evidence of the scientists' innocence and selecting results to build the case against them. In 1996, on appeal, both Baltimore and Imanishi-Kari were fully exonerated. Nobody, it seems, was watching the watchdogs, and science (and scientists) needlessly suffered.

I suspect that tightening security will only result in more such miscarriages of scientific justice. Only a few years ago, I served as an expert witness in a university investigation of alleged fraud that similarly involved leaks to the press and widespread speculation about the scientist's guilt. The investigation had dragged on for three years before I was asked to participate, and in the end the only hard evidence of guilt was an improperly run control in one experiment, which the scientist in question had chosen to interpret in light of several similar control experiments conducted during the same time period (leading to a statement in a published paper that his system produced values in the same range reported extensively by others). I suggested that this was a scientific judgment call, not fraud, and ultimately (finally) the scientist was cleared - and left to struggle for years to catch up with his research program (not to mention gigantic legal bills). This is not an answer to our problem.

So what do we do? Tighten security? Or live with fraud? Or something else? Of course, we have to do something else. Hey, this is the Mole here, I've got some ideas. But first I need to have some toast.