Over What Range Should Reliabilists Measure Reliability?

Abstract

Process reliabilist accounts claim that a belief is justified when it results from a reliable belief-forming process. But over what range of possible token processes should this reliability be calculated? I argue against the idea that reliability should be assessed over all possible token processes (in the actual world, or in some other set of possible worlds), drawing on the case of a user who acquires beliefs based on the output of an AI system: such a system is typically reliable over a substantial local range of inputs but unreliable when all possible inputs are considered. I show that existing solutions to the generality problem imply that these cases cannot be resolved by a more fine-grained typing of the belief-forming process. Instead, I suggest that reliability be evaluated over a range restricted by the content of the actual belief and by the similarity of possible inputs to the actual input.