"I have approximate answers and possible beliefs in different degrees of certainty about different things, but I'm not absolutely sure of anything, and of many things I don't know anything about, but I don't have to know an answer. I don't feel frightened by not knowing things, by being lost in the mysterious universe without having any purpose which is the way it really is as far as I can tell, possibly. It doesn't frighten me."
Since its inception, one of the cornerstones of science has been the rigorous examination and scrutiny of claims about reality. Over time, scientists have developed a wide variety of statistical tools for quantitatively evaluating hypotheses about the world. These include probability values ('p-values'), which allow for the formal evaluation of certain statistical hypotheses (e.g. whether there is any evidence that measured bill lengths of birds come from two different sample populations). By convention, scientists have adopted a threshold for these probability values of 5%. To somewhat oversimplify, this means that when using p-values, we are willing to accept that an observed difference will be a false positive 5% of the time, or 1 time in 20. However, a common critique of the use of p-values is that analyses falling outside the 5% threshold often go unreported, especially under the pressure to report positive results in an effort to publish a given paper in a more prestigious journal. This, I feel, severely handicaps our ability to perform good science and to examine the world on its own terms. In fact, it may be that the publication of negative results is just what is needed to spur a scientific field forward into new theoretical realms.
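The "1 time in 20" logic can be illustrated with a short simulation (a sketch of my own, not from any particular study; the bill-length numbers are made up). If we repeatedly compare two samples drawn from the *same* population, so the null hypothesis is true by construction, roughly 5% of the comparisons should still come out "significant" at the 5% threshold:

```python
import math
import random

def two_sample_p_value(a, b):
    """Approximate two-sided p-value for a difference in means,
    using a z-test (normal approximation; reasonable for n >= ~30)."""
    n_a, n_b = len(a), len(b)
    mean_a, mean_b = sum(a) / n_a, sum(b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n_b - 1)
    se = math.sqrt(var_a / n_a + var_b / n_b)
    z = (mean_a - mean_b) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
trials = 2000
false_positives = 0
for _ in range(trials):
    # both "species" samples come from the SAME population,
    # so any significant difference is a false positive
    a = [random.gauss(13.0, 1.0) for _ in range(50)]
    b = [random.gauss(13.0, 1.0) for _ in range(50)]
    if two_sample_p_value(a, b) < 0.05:
        false_positives += 1

rate = false_positives / trials
print(f"false-positive rate at alpha = 0.05: {rate:.3f}")
```

Run with enough trials, the observed rate hovers near 0.05, which is exactly the error rate the 5% convention agrees to tolerate.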
For example, suppose that a researcher is interested in understanding how a variety of abiotic factors affect the reproduction of fish in a group of stream systems. Drawing on past research in these areas, the researcher measures a variety of variables, including stream temperature, nutrient quality, and water velocity. After determining that the sample size is sufficient, given the variability seen in the data, to properly detect effects if they exist, he runs a series of analyses and finds no significant results for any of the measured variables. The researcher is then met with a choice: scrap the whole analysis, keep trying new and possibly inappropriate statistical techniques until a positive result appears, or report the negative results found by the analyses. The last option is, I feel, the one too few researchers choose, and also one that is remarkably undervalued: it is precisely when our previous understanding of the likely causal drivers in these systems fails to bear fruit that we are pushed to develop our thinking and come up with new ideas about what is driving their behavior. For instance, there may be a species of invasive predator whose feeding habits strongly and negatively influence the reproduction of the fish, whether through heavy levels of predation or through increased stress levels and the associated decrease in fecundity. If researchers do not report these sorts of negative results, the ecological 'man behind the curtain' may never be seen.
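The sample-size determination step in the scenario above can be sketched with the standard two-sample formula, n ≈ 2(z₁₋α/₂ + z_β)²σ²/δ² per group. This is a generic illustration of the idea, not the researcher's actual procedure, and the effect size and variability figures below are hypothetical:

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-sample z-test to
    detect a true mean difference `delta`, given a common standard
    deviation `sigma`, significance level `alpha`, and desired power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# hypothetical example: detect a 0.5 degree difference in mean stream
# temperature between reproducing and non-reproducing sites, sd = 1.0
print(sample_size_per_group(delta=0.5, sigma=1.0))  # -> 63 per group
```

With a sample size chosen this way, a null result carries real information: the study had a stated chance (here 80%) of detecting an effect of the assumed size if one existed, which is what makes the negative finding worth reporting.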
Beyond the other unfortunate effects inherent in the lack of reporting of negative results (e.g. biased meta-analyses, which are used to synthesize information across many published studies), I feel that the avoidance of publishing negative results is a ready recipe for scientific stasis, and that science would be best served if the practice were stopped altogether. There are many social factors that make this difficult (e.g. the career pressure to publish in high-impact journals, which is most readily done when a researcher has significant positive results to discuss), but I think that science as a whole would benefit enormously in the long run if the trend were bucked.