Most of us put great weight on scientific consensus. When the results of peer-reviewed experiments gain general acceptance, it would be silly to doubt.
Unfortunately, it is becoming harder to keep the faith: many results grow harder to replicate as time goes by. The reason? Blame the human factor.
Let's take a fanciful example. You're a research biologist with a "brilliant" idea: red ants are "red" because, back in their evolutionary history, their main predators were color-blind! Red and green looked the same to them!
You obtain grants and embark on experiments. Your grad students return with reams of data. Are you going to pick and choose among the data to confirm your hypothesis? Of course not; you're no fraud! But there are still subjective, delicate decisions to make about exactly which data to report. And, after all, you do hope for positive results - results more likely to be published in leading journals.
In recent years, attempts to replicate initial findings have tended to fail. For instance, the therapeutic value of certain new antipsychotic drugs seems to be waning. A study showing a strong correlation between bodily symmetry in animals and their reproductive success seems to be falling apart. A finding - already in the textbooks - that describing a face doesn't help us remember it may be true, but it is getting harder and harder to prove.
Chance plays a role in all this, of course, but subtle selectivity seems to be a big part of the story. This is unconscious - not fraud. Australian scientist Leigh Simmons (quoted by Jonah Lehrer in a recent New Yorker) put it this way: "The act of measurement is going to be vulnerable to all sorts of perception biases. That's not a cynical statement. That's just the way humans work."
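The selective-reporting mechanism described above can be illustrated with a toy simulation. Everything here is invented for illustration - the "true effect," the sample size, and the publication threshold are arbitrary - but the pattern is general: if only impressive-looking results get reported, the published average overstates the truth, and later replications drift back down toward it.

```python
import random
import statistics

random.seed(42)

def run_experiment(n=30, true_effect=0.2):
    # One "experiment": measure a small true effect through noisy data,
    # returning the observed average effect for this run.
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    return statistics.mean(sample)

# Run many experiments, then "publish" only those whose observed effect
# clears an impressive-looking threshold (selective reporting).
all_effects = [run_experiment() for _ in range(2000)]
published = [e for e in all_effects if e > 0.3]

avg_all = statistics.mean(all_effects)
avg_published = statistics.mean(published)
print(f"true effect: 0.20")
print(f"average across all runs: {avg_all:.2f}")
print(f"average across published runs: {avg_published:.2f}")
```

The published average sits well above the true effect even though no individual measurement was faked - exactly the kind of unconscious bias, not fraud, that makes initial findings look stronger than their replications.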