Six Years of Criticality Assessments: What Have We Learned So Far?
Author(s) - Graedel, T. E.; Reck, Barbara K.
Publication year - 2016
Publication title - Journal of Industrial Ecology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.377
H-Index - 102
eISSN - 1530-9290
pISSN - 1088-1980
DOI - 10.1111/jiec.12305
Subject(s) - criticality, variety (cybernetics), computer science, value (mathematics), risk analysis (engineering), element (criminal law), management science, state (computer science), data science, business, political science, engineering, artificial intelligence, physics, nuclear physics, algorithm, machine learning, law
Summary - The “criticality” of the various elements used in modern technologies is a topic of increasing interest, with groups from governments, consultancies, and academic institutions developing a variety of methodologies and using them to make assessments. Other groups from similar organizations are studying the methodologies that generate these assessments. Here, we analyze the different types of studies, review issues of methodology, and comment on features of nine studies published between 2008 and mid‐2014. From these studies, we derive lists of problematic, debatable, and desirable aspects of criticality studies. We emphasize that the criticality of an element can vary depending on the target organization and that, because criticality is a dynamic state, it must be periodically re‐evaluated. Substantial value could be derived if a more uniform methodology were developed. We discuss how a harmonized methodological framework might be achieved and what its benefits could be. Putting such a structure in place for collaborative and publicly available criticality determinations would very likely serve the present and future needs of corporations and governments better than the current situation, in which different methodologies generate different results.