Funding corrupts science: Research Psychologists screw the scrum.

This does not shock me. At all. There are always ways of designing projects that will increase the chances of a positive result. There is also publication bias: it is far more difficult to get a negative result through peer review, or a result that contradicts the field. And that is why the Open Science report is important.
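Publication bias is easy to make concrete with a quick simulation. The sketch below is my own hypothetical illustration, not from the study: it draws thousands of studies of a true null effect, "publishes" only the ones that reach significance, and shows that the published effect sizes come out far larger than the truth.

```python
import random
import statistics

random.seed(42)

def run_study(true_effect=0.0, n=30):
    """Simulate one study: n observations with a known SD of 1."""
    data = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.fmean(data)
    z = mean * n ** 0.5  # z-statistic, assuming sigma = 1
    return mean, z

# Simulate 10,000 studies of a true null effect; "publish" only
# those reaching two-sided significance (|z| > 1.96).
results = [run_study() for _ in range(10_000)]
published = [m for m, z in results if abs(z) > 1.96]

all_mean = statistics.fmean(abs(m) for m, _ in results)
pub_mean = statistics.fmean(abs(m) for m in published)
print(f"fraction published: {len(published) / len(results):.3f}")  # ~0.05
print(f"mean |effect|, all studies:    {all_mean:.3f}")
print(f"mean |effect|, published only: {pub_mean:.3f}")
```

Roughly five percent of null studies "succeed", and the filter guarantees their reported effects are inflated: a literature built only from the published rows would look like solid evidence for an effect that does not exist.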


“I think we knew or suspected that the literature had problems, but to see it so clearly, on such a large scale — it’s unprecedented,” said Jelte Wicherts, an associate professor in the department of methodology and statistics at Tilburg University in the Netherlands.

More than 60 of the studies did not hold up. Among them was one on free will. It found that participants who read a passage arguing that their behavior is predetermined were more likely than those who had not read the passage to cheat on a subsequent test.

Another was on the effect of physical distance on emotional closeness. Volunteers asked to plot two points that were far apart on graph paper later reported weaker emotional attachment to family members, compared with subjects who had graphed points close together.

A third was on mate preference. Attached women were more likely to rate the attractiveness of single men highly when the women were highly fertile, compared with when they were less so. In the reproduced studies, researchers found weaker effects for all three experiments.

The project began in 2011, when a University of Virginia psychologist decided to find out whether suspect science was a widespread problem. He and his team recruited more than 250 researchers, identified the 100 studies published in 2008, and rigorously redid the experiments in close collaboration with the original authors.

The new analysis, called the Reproducibility Project, found no evidence of fraud or that any original study was definitively false. Rather, it concluded that the evidence for most published findings was not nearly as strong as originally claimed.

Source: Many Psychology Findings Not as Strong as Claimed, Study Says – The New York Times

How can you do this?

  1. You can choose measures that bias towards the results you want. This is less of a problem in clinical work, where the measures are standardized and new outcomes are carefully tested.
  2. You can have non-parallel treatments. The classic is using a wait-list or treatment-as-usual control condition. That does not allow for the effect of being seen, which is commonly called the placebo effect.
  3. The cohort of people who undertake these experiments has changed: there are cohort effects relating to generations, and there are differences in vulnerability to mental illness, and in reaction to social cues, with age.
  4. Some of the old research was not properly consented, used deception, and would not meet modern standards.
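The first trick in the list, shopping among candidate measures, can be shown with a toy simulation of my own (not from the post): if you measure ten independent null-effect outcomes and are free to report whichever one comes up significant, the false-positive rate climbs well above the nominal 5%.

```python
import random

random.seed(1)

def z_stat(n=30):
    # z-statistic for the mean of n draws from a true-null N(0, 1).
    mean = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    return mean * n ** 0.5

def any_significant(k):
    # "Fish" across k independent outcome measures; report success
    # if ANY of them reaches two-sided significance (|z| > 1.96).
    return any(abs(z_stat()) > 1.96 for _ in range(k))

trials = 4_000
rates = {k: sum(any_significant(k) for _ in range(trials)) / trials
         for k in (1, 5, 10)}
for k, rate in rates.items():
    print(f"{k:2d} candidate measures -> false-positive rate ~ {rate:.2f}")
```

With one measure the rate sits near 0.05; with ten it approaches 0.40, which is why pre-registering a single primary outcome matters.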

Most of my research is meta-analysis: if I don’t have multiple studies, I don’t have data to crunch. Some of my research is in psychiatric epidemiology: I know far too much about scales and the reliability of interviews… and now really want to see hard data, such as death rates. And some of my work is running clinical trials, which you can do with minimal funding — if you don’t have the FDA on your back, if you have a network of collaborators, and if you have a salary, so you are not dependent on getting funding.
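For readers unfamiliar with meta-analysis, the core arithmetic of the fixed-effect version is just an inverse-variance weighted average of the study effects. The sketch below uses made-up effect sizes and standard errors purely to show the mechanics.

```python
import math

# Hypothetical effect sizes (d) and standard errors from five studies.
studies = [(0.30, 0.12), (0.15, 0.20), (0.45, 0.10),
           (0.05, 0.25), (0.25, 0.15)]

# Fixed-effect meta-analysis: weight each study by 1 / SE^2, so
# precise studies count for more, then pool.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled d = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

The pooled standard error is smaller than that of any single study, which is the whole point: multiple studies buy precision no single trial can.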
For funding has corrupted the field.