Performance tests in human psychopharmacology (3): Construct validity and test interpretation
Author(s) - Parrott A. C.
Publication year - 1991
Publication title - Human Psychopharmacology: Clinical and Experimental
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.461
H-Index - 78
eISSN - 1099-1077
pISSN - 0885-6222
DOI - 10.1002/hup.470060303
Subject(s) - construct validity , face validity , test interpretation , psychometrics , cognitive psychology , psychology , inference
Abstract The content, criterion and face validity of human performance tests in psychopharmacology research were examined in an earlier paper. Most evidence on test meaning and interpretation, however, comprises construct validity. This is critically scrutinized here, through a brief analysis of human performance theory (Broadbent, Sanders, Sternberg) and taxonomies of human performance function (Fleishman, Holding). While much construct evidence is rather nebulous and untestable (non‐disprovable), it is probably the most accurate representation of current views on performance assessment. Tests gradually fall out of, or into, favour, rather than being shown to be clearly invalid or valid. Factor analysis and task discrimination are, however, two procedures for placing test interpretation and meaning on a sounder empirical basis. Returning to the overall problem of how best to investigate test validity, several proposals are made. Further criterion information should be sought in large‐scale studies designed specifically to compare the practical utility of different tests and test batteries. There needs to be a greater atmosphere of critical debate on test meaning, in order to clarify the performance functions being assessed. Factor analysis and task discrimination procedures also need to be more widely employed. Lastly, although many tests can be seen as measures of ‘information processing’, the inference that they provide indices of ‘real‐life performance’ can rarely be made on current evidence.