Open Access
Effects of sample size and distributional assumptions on competing models of the factor structure of the PANSS and BPRS
Author(s) -
Tueller Stephen J.,
Johnson Kiersten L.,
Grimm Kevin J.,
Desmarais Sarah L.,
Sellers Brian G.,
Van Dorn Richard A.
Publication year - 2017
Publication title -
international journal of methods in psychiatric research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.275
H-Index - 73
eISSN - 1557-0657
pISSN - 1049-8931
DOI - 10.1002/mpr.1549
Subject(s) - sample size determination , econometrics , statistics , psychology , ordinal data , normality , scale (ratio) , rating scale , mathematics
Abstract - Factor analytic work on the Positive and Negative Syndrome Scale (PANSS) and Brief Psychiatric Rating Scale (BPRS) has yielded varied and conflicting results. The current study explored potential causes of these discrepancies. Prior research has been limited by small sample sizes and an incorrect assumption that the items are normally distributed, when in practice responses are highly skewed ordinal variables. Using simulation methodology, we examined the effects of sample size, (in)correctly specifying item distributions, and collapsing rarely endorsed response categories across four factor analytic models. The first, the model of Van Dorn et al., was developed using a large integrated data set, specified the item distributions as multinomial, and used cross‐validation. The remaining models were developed specifying item distributions as normal: the commonly used pentagonal model of White et al.; the model of Van der Gaag et al., developed using extensive cross‐validation methods; and the model of Shafer, developed through meta‐analysis. Our simulation results indicated that incorrectly assuming normality led to biases in model fit and factor structure, especially at small sample sizes. Collapsing rarely used response options had negligible effects.
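The core methodological point of the abstract — that treating highly skewed ordinal ratings as if they were normal biases the correlational structure that factor analysis operates on — can be illustrated with a minimal simulation. The sketch below is not the authors' simulation design; it is a hypothetical example (thresholds, sample size, and the latent correlation of 0.6 are all assumed for illustration) showing that Pearson correlations computed on skewed ordinal scores are attenuated relative to the correlation of the underlying continuous item propensities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical sample size for the illustration

# Latent bivariate normal "item propensities" with true correlation 0.6
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
latent = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Discretize into a skewed 7-point ordinal scale with most mass in the low
# categories, mimicking rarely endorsed upper response options on symptom
# rating items (cut points are hypothetical, not PANSS/BPRS thresholds)
thresholds = [-0.5, 0.3, 0.9, 1.5, 2.0, 2.5]
ordinal = np.digitize(latent, thresholds) + 1  # categories 1..7

# Pearson correlation of the ordinal scores (i.e. treating the items as
# normally distributed continuous variables) underestimates the latent
# correlation; ordinal/polychoric methods are designed to avoid this bias
r_obs = np.corrcoef(ordinal[:, 0], ordinal[:, 1])[0, 1]
print(f"latent rho = {rho:.2f}, observed Pearson r = {r_obs:.2f}")
```

With smaller n, this attenuated and noisier correlation matrix is what a normal-theory factor analysis consumes, which is consistent with the abstract's finding that misspecifying the item distributions biased model fit and factor structure most at small sample sizes.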