Open Access
DEVELOPMENT OF MEASURES FOR THE STUDY OF CREATIVITY
Author(s) - Frederiksen Norman, Ward William C.
Publication year - 1975
Publication title - ETS Research Bulletin Series
Language(s) - English
Resource type - Journals
eISSN - 2333-8504
pISSN - 0424-6144
DOI - 10.1002/j.2333-8504.1975.tb01058.x
Subject(s) - quality (philosophy), set (abstract data type), aptitude, psychology, factoring, test (biology), scale (ratio), applied psychology, creativity, statistics, computer science, social psychology, developmental psychology, mathematics, paleontology, finance, philosophy, physics, epistemology, quantum mechanics, economics, biology, programming language
Research on creative thinking has been handicapped by a lack of adequate criteria. The purpose of this study was to develop a set of tests that could be used as dependent measures in evaluating training or in other research on “creativity.” Four tests were developed, called Formulating Hypotheses, Evaluating Proposals, Solving Methodological Problems, and Measuring Constructs. They are job‐sample tests that present realistic tasks of the kind a behavioral scientist might have to deal with. A scoring method was developed that requires the scorer to assign responses to categories of responses rather than to make subjective evaluations. The categories are assigned scale values based on independent evaluations by an expert panel, and scores can be assigned by computer. Six scores were studied: (1) average quality of the responses the examinee thinks are best, (2) average quality of all responses, (3) average quality of the best response by category scoring, (4) number of responses, (5) number of unusual responses, and (6) number of responses that are both unusual and of high quality. The tests were administered to about 4,000 applicants for admission to graduate school, using an item‐sampling procedure. The tests were found to be appropriate in difficulty for advanced students, and the reliabilities of most scores were high enough to be useful. Factoring of the score intercorrelations revealed a general number‐of‐responses factor and two quality factors defined by quality scores from different combinations of tests. The number scores are quite independent of conventional aptitude and achievement tests, and the quality scores have a substantial amount of true variance not predicted by aptitude and achievement tests. The face validity of the tests seems to appeal to students and teachers, but evidence of construct validity is still needed.
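To make the scoring procedure concrete, the following is a minimal sketch, in Python, of how the six scores might be computed for a single examinee once a scorer has assigned each response to a category. The category labels, scale values, frequency table, and thresholds (SCALE_VALUES, CATEGORY_FREQUENCY, UNUSUAL_THRESHOLD, HIGH_QUALITY_CUTOFF) and the Response/score_examinee names are illustrative assumptions, not the actual ETS scoring tables or software.

# Sketch of category-based scoring as described in the abstract.
# All scale values, frequencies, and cutoffs below are hypothetical.

from dataclasses import dataclass
from statistics import mean

# Hypothetical scale values assigned to response categories by an expert panel.
SCALE_VALUES = {"cat_A": 4.5, "cat_B": 3.0, "cat_C": 1.5}

# Hypothetical base rates used to flag "unusual" (infrequent) categories.
CATEGORY_FREQUENCY = {"cat_A": 0.02, "cat_B": 0.30, "cat_C": 0.68}
UNUSUAL_THRESHOLD = 0.05   # category given by fewer than 5% of examinees
HIGH_QUALITY_CUTOFF = 4.0  # scale value treated as "high quality"

@dataclass
class Response:
    category: str       # category the scorer assigned the response to
    marked_best: bool   # whether the examinee flagged it as one of their best

def score_examinee(responses: list[Response]) -> dict[str, float]:
    """Compute the six abstract-described scores for one examinee's item."""
    qualities = [SCALE_VALUES[r.category] for r in responses]
    best_marked = [SCALE_VALUES[r.category] for r in responses if r.marked_best]
    unusual = [r for r in responses
               if CATEGORY_FREQUENCY[r.category] < UNUSUAL_THRESHOLD]
    return {
        "quality_of_marked_best": mean(best_marked) if best_marked else 0.0,
        "quality_of_all": mean(qualities) if qualities else 0.0,
        "quality_of_best_response": max(qualities) if qualities else 0.0,
        "number_of_responses": len(responses),
        "number_unusual": len(unusual),
        "number_unusual_high_quality": sum(
            1 for r in unusual if SCALE_VALUES[r.category] >= HIGH_QUALITY_CUTOFF),
    }

if __name__ == "__main__":
    examinee = [Response("cat_A", marked_best=True),
                Response("cat_B", marked_best=False),
                Response("cat_C", marked_best=False)]
    print(score_examinee(examinee))

In the study itself these quantities would be computed per item and averaged over the items each examinee received under the item-sampling design; the sketch shows only the per-item step, since the abstract does not specify the aggregation details.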
