Open Access
High‐Performance Psychometrics: The Parallel‐E Parallel‐M Algorithm for Generalized Latent Variable Models
Author(s) - Matthias von Davier
Publication year - 2016
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12120
Subject(s) - computer science , algorithm , multicore processor , parallel algorithm , latent variable , parallel computing , variable (mathematics) , mathematics , machine learning
This report presents results on a parallel implementation of the expectation‐maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel‐E parallel‐M algorithm. Examples presented in this report include item response theory, diagnostic classification models, multitrait–multimethod (MTMM) models, and discrete mixture distribution models. These types of models are frequently applied to the analysis of multidimensional responses of test takers to a set of items, for example, in the context of proficiency testing. The algorithm presented here is based on a direct implementation of massive parallelism using a paradigm that allows the distribution of work among a number of processor cores. Modern desktop computers as well as many laptops use processors that contain 2–4 cores and potentially twice that number of virtual cores. Many servers use 2, 4, or more multicore central processing units (CPUs), which brings the number of cores to 8, 12, 32, or even 64 or more. The algorithm presented here scales the time reduction in the most calculation‐intensive part of the program almost linearly for some problems, which means that a server with 32 physical cores executes the parallel‐E step algorithm up to 24 times faster than a single‐core computer or the equivalent nonparallel algorithm. The overall gain (including parts of the program that cannot be executed in parallel) can reach a reduction in time by a factor of 6 or more on a 12‐core machine. The basic approach is to utilize the architecture of modern CPUs, which often involves the design of processors with multiple cores that can run programs simultaneously.
The use of this type of architecture for algorithms that produce posterior moments has straightforward appeal: The calculations conducted for each respondent or each distinct response pattern can be split up into simultaneous calculations.
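The parallelization strategy described above can be illustrated with a minimal sketch. The code below is not the report's implementation; it assumes a simple Rasch‐type model with fixed quadrature nodes and illustrative item difficulties, and shows how the per‐respondent posterior calculations of the E step are independent and can therefore be mapped across processor cores.

```python
# Hedged sketch of a parallel E step; all model parameters here
# (quadrature nodes, prior weights, item difficulties) are assumptions
# for illustration, not values from the report.
import math
from concurrent.futures import ProcessPoolExecutor

THETA = [-2.0, -1.0, 0.0, 1.0, 2.0]   # quadrature nodes (assumed)
PRIOR = [0.1, 0.2, 0.4, 0.2, 0.1]     # prior weights (assumed)
BETA = [-0.5, 0.0, 0.5]               # item difficulties (assumed)

def p_correct(theta, beta):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def posterior(pattern):
    """Posterior weights over quadrature nodes for one response pattern."""
    joint = []
    for theta, w in zip(THETA, PRIOR):
        lik = w
        for x, beta in zip(pattern, BETA):
            p = p_correct(theta, beta)
            lik *= p if x == 1 else (1.0 - p)
        joint.append(lik)
    total = sum(joint)
    return [j / total for j in joint]

def parallel_e_step(patterns, workers=4):
    # Each respondent's (or distinct response pattern's) posterior is
    # independent, so the map distributes cleanly across cores.
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(posterior, patterns, chunksize=64))
```

Because the likelihood evaluations for different response patterns share no state, the only serial work left in this step is collecting the results, which is what allows the near‐linear scaling reported for the E step.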
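The quoted overall gains (e.g., roughly a factor of 6 on 12 cores despite near‐linear E‐step scaling) are consistent with Amdahl's law, which bounds the total speedup by the fraction of work that cannot run in parallel. The helper below is an illustration of that relationship; the parallel fraction used in the check is an assumption, not a figure from the report.

```python
# Amdahl's law: overall speedup when only part of the work parallelizes.
def amdahl_speedup(parallel_fraction, cores):
    """Speedup on `cores` cores if `parallel_fraction` of the runtime
    scales perfectly and the remainder stays serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
```

For example, a (hypothetical) parallel fraction of 10/11 on 12 cores gives an overall speedup of exactly 6, matching the order of magnitude the abstract reports for a 12‐core machine.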
