Combining Versus Analyzing Multiple Causes: How Domain Assumptions and Task Context Affect Integration Rules
Author(s) - Waldmann, Michael R.
Publication year - 2007
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1080/15326900701221231
Subject(s) - task (project management) , cognitive psychology , affect (linguistics) , context (archaeology) , inference , causal inference , psychology , domain (mathematical analysis) , causal structure , computer science , domain specificity , artificial intelligence , machine learning , social psychology , cognition , econometrics , mathematics , communication , paleontology , mathematical analysis , physics , management , quantum mechanics , neuroscience , economics , biology
In everyday life, people typically observe fragments of causal networks. From this knowledge, people infer how novel combinations of causes they may never have observed together might behave. I report on 4 experiments that address the question of how people intuitively integrate multiple causes to predict a continuously varying effect. Most theories of causal induction in psychology and statistics assume a bias toward linearity and additivity. In contrast, these experiments show that people are sensitive to cues biasing various integration rules. Causes that refer to intensive quantities (e.g., taste) or to preferences (e.g., liking) bias people toward averaging the causal influences, whereas extensive quantities (e.g., strength of a drug) lead to a tendency to add. However, the knowledge underlying these processes is fallible and unstable. Therefore, people are easily influenced by additional task‐related context factors. These additional factors include the way data are presented, the difficulty of the inference task, and transfer from previous tasks. The results of the experiments provide evidence for causal model and related theories, which postulate that domain‐general representations of causal knowledge are influenced by abstract domain knowledge, data‐driven task factors, and processing difficulty.
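The abstract's central contrast, adding versus averaging causal influences, can be sketched in a few lines. The cause strengths below are illustrative values, not stimuli from the experiments; the sketch only shows how the two rules diverge when causes combine.

```python
# Minimal sketch of the two integration rules the abstract contrasts.
# The numeric cause strengths are hypothetical, chosen only to make
# the divergence between the rules visible.

def additive_integration(causes):
    """Extensive quantities (e.g., strength of a drug): influences sum."""
    return sum(causes)

def averaging_integration(causes):
    """Intensive quantities (e.g., taste) or preferences: influences average."""
    return sum(causes) / len(causes)

# Two causes of equal strength acting together:
causes = [5.0, 5.0]
print(additive_integration(causes))   # 10.0 -> joint effect exceeds either cause alone
print(averaging_integration(causes))  # 5.0  -> joint effect matches each single cause
```

Under the additive rule a second cause of equal strength doubles the predicted effect, whereas under the averaging rule it leaves the prediction unchanged, which is the behavioral signature the experiments use to diagnose which rule participants apply.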