The Generality Problem, Statistical Relevance and the Tri‐Level Hypothesis
Author(s) - Beebe, James R.
Publication year - 2004
Publication title - Noûs
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.574
H-Index - 66
eISSN - 1468-0068
pISSN - 0029-4624
DOI - 10.1111/j.1468-0068.2004.00467.x
Subject(s) - generality, relevance (law), philosophy, epistemology, functionalism (philosophy of mind), sociology, psychology, psychoanalysis, law, political science, psychotherapist
A cognitive process will be reliable just when it yields a sufficiently high ratio of true to false beliefs. If a belief is produced by a process with a high degree of reliability, then that belief will have a high degree of justification. If, however, a belief is produced by a cognitive process with a low degree of reliability, then that belief will have a low degree of justification. After two decades of debate, a few objections have emerged as the standard challenges to reliabilism. The Generality Problem is one such objection, the most visible proponent of which has been Richard Feldman (1985; Conee & Feldman 1998). It is now cited as a serious problem for reliabilism in almost every introductory text on epistemology. In this article I offer a solution to the Generality Problem. The Generality Problem arises because reliabilists claim that it is process types rather than process tokens that are the bearers of reliability. A process token is an unrepeatable causal sequence occurring at a particular time and place. Consequently, it makes no sense to ask whether a process token is reliable (i.e., whether it would produce mostly true beliefs over a wide range of cases). Accordingly, reliabilists have claimed that only process types can be reliable or unreliable. We can revise (R1) to take this point into account. The difficulty is that any particular process token instantiates indefinitely many process types, which may differ widely in reliability, so the reliabilist must specify which type's reliability fixes the justification of the resulting belief.
