In Response : We are all biased, but the scientific process recognizes that and delivers despite it; still, it can do a better job—A perspective from academia
Author(s) - Peter Calow, Valery Forbes
Publication year - 2016
Publication title - Environmental Toxicology and Chemistry
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.1
H-Index - 171
eISSN - 1552-8618
pISSN - 0730-7268
DOI - 10.1002/etc.3355
There is little reason to believe that scientists as individuals are immune to the cognitive biases and heuristics (rules of thumb used in problem solving) that affect people in general [1,2]. There is a persuasive argument that evolution shaped our rapid, intuitive responses to problem solving to meet the need to survive in an ancestral environment, one not necessarily in tune with our current needs for problem solving in a technological age [3]. This is 1 reason why our preconceived ideas about likely adverse effects from risk agents, or about the causes of observed adverse effects in people and ecosystems, often turn out to be wrong.

Science, the process, recognizes these fundamental biases and has nevertheless delivered knowledge that has improved health, communication, transport, and a host of other benefits. There are 2 important features of the scientific process. First, all preconceived ideas seeking to explain anything are assessed against evidence gathered under carefully controlled conditions (the experimental ideal); second, this assessment should be carried out in a way that is acceptable to the community of scientists, through peer review and publication. So science is supposed to be self-correcting.

Yet a growing body of evidence calls into question the ability of science to self-correct. This includes evidence of funding bias in risk assessments [4] and, more broadly, evidence of increasing retraction rates in the primary literature that are predominantly the result of fraud [5], as well as concerns about the ability to reproduce results in both the social [6] and natural [7] sciences. The implications are that evidence is being collected sloppily and that factors other than the evidence itself are influencing the conclusions drawn from scientific work.
Developments such as these seem to be increasing as the opportunities to publish increase (for example, through various online venues) and as the pressures to publish (for recognition, career development, and funding) also increase. This raises deep and difficult questions for which there is unlikely to be a silver-bullet solution. Yet there can be no doubt that a process of knowledge acquisition based on evidence is likely to lead to better conclusions for risk assessment, and for environmental policy in general, than one based on intuition and negotiation. So the imperative is to improve the scientific process so that it delivers more effectively. An important aspect of In This Issue: