Development, Validation and Application of Computer‐linked Knowledge Questionnaires in Diabetes Education
Author(s) - Meadows K. A., Fromson B., Gillespie C., Brewer A., Carter C., Lockington T., Clark G., Wise P. H.
Publication year - 1988
Publication title - Diabetic Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.474
H-Index - 145
eISSN - 1464-5491
pISSN - 0742-3071
DOI - 10.1111/j.1464-5491.1988.tb00943.x
Subject(s) - medicine, diabetes mellitus, endocrinology, insulin, multiple choice, psychometrics, internal consistency, reliability, documentation, family medicine, clinical psychology
Multiple choice questionnaires (MCQs), which can be marked manually, by a newly developed optical mark reader, or by an inexpensive interactive microcomputer system, have been developed for the separate assessment of insulin-dependent and non-insulin-dependent patient knowledge. Forty-six insulin-related and non-insulin-related multiple choice questions covering six main areas of knowledge were constructed for inclusion in the draft questionnaires. From the responses to a total of 180 completed questionnaires, piloted in 18 randomly selected clinics in 14 Regional Health Authorities in England, psychometric analysis was performed to determine reliability, discrimination coefficients, and facility indices. Seventy-three per cent of correct options on the insulin-dependent (IDDM) questionnaire and 92% on the non-insulin-dependent (NIDDM) questionnaire had facility indices within the acceptable range of 30 to 90%. Eighty-two per cent of IDDM and 93% of NIDDM correct options had discrimination coefficients exceeding 0.2. Questionnaire reliability (internal consistency), assessed with the Kuder-Richardson (KR20) formula, was 0.87 (IDDM) and 0.82 (NIDDM). Evidence in support of the IDDM questionnaire's criterion validity was based on significant differences (p < 0.05) identified between a number of knowledge area scores stratified according to HbA1 levels. Prescriptive corrective feedback, for screen display and automatic hard copy, was designed for both incorrect and omitted question options, providing both educational (patient) and analytical (clinic) documentation. Both the technical and psychometric properties of these knowledge assessment instruments should be acceptable for diabetic knowledge evaluation and instruction.
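For readers reproducing a similar item analysis, the sketch below illustrates how the three statistics reported in the abstract (facility index, discrimination coefficient, and KR20 internal consistency) can be computed from a binary item-response matrix. This is a minimal illustration under the assumption of 0/1-scored responses; the function names, simulated data, and thresholds shown in the example are illustrative and not taken from the paper itself.

```python
# Minimal sketch of the item analysis described in the abstract,
# assuming a respondents x items matrix of 0/1 (incorrect/correct) scores.
import numpy as np

def facility_indices(responses: np.ndarray) -> np.ndarray:
    """Percentage of respondents answering each item correctly."""
    return responses.mean(axis=0) * 100

def discrimination_coefficients(responses: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score on the remaining items."""
    coeffs = []
    for j in range(responses.shape[1]):
        rest_total = responses.sum(axis=1) - responses[:, j]
        coeffs.append(np.corrcoef(responses[:, j], rest_total)[0, 1])
    return np.array(coeffs)

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson formula 20: internal consistency for dichotomous items."""
    k = responses.shape[1]
    p = responses.mean(axis=0)              # proportion correct per item
    q = 1.0 - p
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_variance)

# Illustrative run: 180 simulated respondents x 46 items, using the
# acceptance thresholds quoted in the abstract (30-90% facility, >0.2 discrimination).
rng = np.random.default_rng(0)
data = (rng.random((180, 46)) < 0.6).astype(int)
fac = facility_indices(data)
print("KR-20:", round(kr20(data), 2))
print("Items in 30-90% facility range:", int(((fac >= 30) & (fac <= 90)).sum()))
print("Items with discrimination > 0.2:",
      int((discrimination_coefficients(data) > 0.2).sum()))
```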