Tuesday, November 06, 2012
Publication Bias In A Society Which Says What Should Be Published
However, that is not the important point:
RF: Your paper with Roland Fryer and Steven Levitt came to a somewhat ambiguous conclusion about whether stereotype threat exists. But do you have a hunch regarding the answer to that question based on the results of your experiment?
List: I believe in priming. Psychologists have shown us the power of priming, and stereotype threat is an interesting type of priming. Claude Steele, a psychologist at Stanford, popularized the term stereotype threat. He had people taking a math exam, for example, jot down whether they were male or female on top of their exams, and he found that when you wrote down that you were female, you performed less well than if you did not write down that you were female. They call this the stereotype threat. My first instinct was that effect probably does happen, but you could use incentives to make it go away. And what I mean by that is, if the test is important enough or if you overlaid monetary incentives on that test, then the stereotype threat would largely disappear, or become economically irrelevant.
So we designed the experiment to test that, and we found that we could not even induce stereotype threat. We did everything we could to try to get it. We announced to them, “Women do not perform as well as men on this test and we want you now to put your gender on the top of the test.” And other social scientists would say, that’s crazy — if you do that, you will get stereotype threat every time. But we still didn’t get it. What that led me to believe is that, while I think that priming works, I think that stereotype threat has a lot of important boundaries that severely limit its generalizability. I think what has happened is, a few people found this result early on and now there’s publication bias. But when you talk behind the scenes to people in the profession, they have a hard time finding it. So what do they do in that case? A lot of people just shelve that experiment; they say it must be wrong because there are 10 papers in the literature that find it. Well, if there have been 200 studies that try to find it, 10 should find it, right?
I think this is almost certainly the case throughout the "ologies" (the non-rigorous sciences where politicians make it clear what results are desired).
It does happen even in the hard sciences. Here is Richard Feynman's description of how this bias slowed the determination of a critical number from Millikan's experiment, because Millikan had got it slightly wrong first. However, in physics it does seem that these errors get corrected in time.
Not so in the political "sciences". There something called the meta-study is common. This is a simple technique whereby "researchers" too lazy to do original research simply add up all the published research and average it out.
By definition, if there is any publication bias at all, then the averaged result is bound to reflect that bias. Studies showing no result, or the opposite result, aren't published or simply aren't included.
If there is political pressure to show an effect where in fact there is none, what we would expect is: a significant number of results showing no effect (because that is what researchers are actually seeing); none, or almost none, showing a negative effect, because those don't get published; and a few showing a positive result, because through random chance a positive result is as likely as a negative one. Essentially we should see only the right-hand side of a normal curve. If actual fraud is also involved, we would see more results at the far end of the curve, where there should be very few, than expected.
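This selection effect is easy to demonstrate numerically. Below is a minimal sketch (in Python, with made-up numbers) of 200 studies of an effect whose true size is zero: averaging all of them gives roughly nothing, but a "meta-study" of only the publishable positive findings manufactures an effect out of pure noise.

```python
import random
import statistics

random.seed(42)

# Simulate 200 studies of an effect whose true size is zero.
# Each study reports a normally distributed estimate (pure noise).
true_effect = 0.0
estimates = [random.gauss(true_effect, 1.0) for _ in range(200)]

# Publication filter: only "positive" findings (estimate above a
# one-sided ~5% significance threshold) make it into the literature.
published = [e for e in estimates if e > 1.64]

# The full set of studies averages out to roughly the true effect
# of zero...
print(round(statistics.mean(estimates), 2))

# ...but averaging only the published studies is biased well upward,
# and roughly 5% of 200, i.e. about 10 studies, get published.
print(round(statistics.mean(published), 2))
print(len(published))
```

Note that "about 10 out of 200" is exactly the ratio List mentions for stereotype threat, and it falls straight out of the significance threshold with no real effect at all.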
Which reminds me of the passive smoking "research". This is from the speech I made to the LibDems when I was the only person to speak at conference against the smoking ban.
A BMJ statistical analysis found only slight statistical significance when 48 studies were combined. Looked at separately, only seven showed significant excesses of lung cancer, meaning 41 did not. Further, the combined excess risk was merely 24 percent, also called a "relative risk" of 1.24. Such tiny relative risks are considered meaningless, given the myriad pitfalls in epidemiological studies. "As a general rule of thumb," says Marcia Angell, editor of the prestigious New England Journal of Medicine, "we are looking for a relative risk of 3 or more" before even accepting a paper for publication.
I didn't realise it at the time, but if only 7 "studies" show any effect and 41 don't, then we are seeing the right-hand side of a normal curve.
Actually, if those 7 were significant enough to bring the average risk up to that level, they must have been well beyond what would appear in a normal curve, and most, if not all, must have been false.
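The arithmetic behind this can be checked with a back-of-envelope calculation. Assuming, purely for illustration, a simple unweighted average (a real meta-analysis weights studies by size) and that the 41 null studies cluster around a relative risk of 1.0, the seven positive studies would have to average a relative risk of roughly 2.65 to drag the combined figure up to 1.24:

```python
# Back-of-envelope only: assumes an unweighted average and that the
# 41 non-significant studies sit at the no-effect value of RR = 1.0.
n_total, n_positive = 48, 7
combined_rr, null_rr = 1.24, 1.0

# Solve (41 * 1.0 + 7 * x) / 48 = 1.24 for x:
implied_rr = (combined_rr * n_total - (n_total - n_positive) * null_rr) / n_positive
print(round(implied_rr, 2))  # 2.65
```

A relative risk of 2.65 arising from chance alone would sit far out in the tail of the distribution the other 41 studies imply, which is the sense in which those seven results look too good to be mere noise.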
The degree of political pressure to come up with such results can be seen in this response from the BMA, in which they try to maintain, at the same time, both their own claim that passive smoking kills 1,000 people across Britain annually and Jack McConnell's claim that it kills 1,000 among the 8% of Britons who live in Scotland. The two are clearly incompatible, not only to anybody with any scientific or medical knowledge but to anybody capable of simple arithmetic: the Scottish figure, scaled up, would imply about 12,500 deaths across Britain. Whatever one may think of our doctors, it is undeniable from their response that their professional organisation is either wholly unscientific and innumerate, or wholly corrupt and dishonest, or both.
In turn this means that ANY result which is close to the edge of statistical significance, but which has political support, must be assumed purely speculative until there is a large number of completely independent double-blind studies proving it. This is the method used in the testing of medicines, and nothing else can, by definition, be believable.
No wonder Lysenko came up with results to "prove" his claims. The same would have happened had eugenics remained socially acceptable, or telekinesis, or the claim that contact with Green party members causes cancer.
I am not saying the last is true, but I will say that any Green party supporter who denies it while claiming the passive smoking scare is genuine is a liar.