Reviewing studies with diverse designs: the development and evaluation of a new tool
Author(s) - Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry
Publication year - 2012
Publication title - Journal of Evaluation in Clinical Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.737
H-Index - 73
eISSN - 1365-2753
pISSN - 1356-1294
DOI - 10.1111/j.1365-2753.2011.01662.x
Subject(s) - reliability, usability, quality assessment, face validity, content validity, psychometrics, applied psychology, clinical psychology, management science
Rationale, aims & objective Tools for the assessment of the quality of research studies tend to be specific to a particular research design (e.g. randomized controlled trials, or qualitative interviews). This makes it difficult to assess the quality of a body of research that addresses the same or a similar research question but using different approaches. The aim of this paper is to describe the development and preliminary evaluation of a quality assessment tool that can be applied to a methodologically diverse set of research articles. Methods The 16‐item quality assessment tool (QATSDD) was assessed to determine its reliability and validity when used by health services researchers in the disciplines of psychology, sociology and nursing. Qualitative feedback was also gathered from mixed‐methods health researchers regarding the comprehension, content, perceived value and usability of the tool. Results Reference to existing widely used quality assessment tools and experts in systematic review confirmed that the components of the tool represented the construct of ‘good research technique’ being assessed. Face validity was subsequently established through feedback from a sample of nine health researchers. Inter‐rater reliability was established through substantial agreement between three reviewers when applying the tool to a set of three research papers (κ = 71.5%), and good to substantial agreement between their scores at time 1 and after a 6‐week interval at time 2 confirmed test–retest reliability. Conclusions The QATSDD shows good reliability and validity for use in the quality assessment of a diversity of studies, and may be an extremely useful tool for reviewers to standardize and increase the rigour of their assessments in reviews of the published papers which include qualitative and quantitative work.