Reliability of motor development data in the WHO Multicentre Growth Reference Study
Author(s) - de Onis, Mercedes
Publication year - 2006
Publication title - Acta Pædiatrica
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.772
H-Index - 115
eISSN - 1651-2227
pISSN - 0803-5253
DOI - 10.1111/j.1651-2227.2006.tb02375.x
Subject(s) - concordance , medicine , standardization , kappa , Cohen's kappa , inter-rater reliability , reliability , homogeneity , physical therapy , psychology , developmental psychology , statistics , rating scale
Aim: To describe the methods used to standardize the assessment of motor milestones in the WHO Multicentre Growth Reference Study (MGRS) and to present estimates of the reliability of the assessments.

Methods: As part of the MGRS, longitudinal data were collected on the acquisition of six motor milestones by children aged 4 to 24 mo in Ghana, India, Norway, Oman and the USA. To ensure standardized data collection, the sites conducted regular standardization sessions during which fieldworkers took turns examining and scoring about 10 children on the six milestones. The assessments were videotaped, and the other fieldworkers at the same site later watched the taped sessions and independently rated the performances. The assessments were also viewed and rated by the study coordinator, whose ratings were taken as the reference (true) scores. In addition, one cross‐site standardization exercise took place using videotapes of 288 motor assessments. The degree of concordance between fieldworkers and the coordinator was analysed using the Kappa coefficient and the percentage of agreement.

Results: Overall, high percentages of agreement (81–100%) between fieldworkers and the coordinator and "substantial" (0.61–0.80) to "almost perfect" (>0.80) Kappa coefficients were obtained for all fieldworkers, milestones and sites. Homogeneity tests confirmed that the Kappas were homogeneous across sites, across milestones and across fieldworkers. Concordance was slightly higher in the cross‐site session than in the site standardization sessions, and there were no systematic differences between assessing children by direct examination and through videotapes.

Conclusion: These results show that the criteria used to define performance of the milestones were understood in the same way and were applied with equally high levels of reliability among fieldworkers within a site, among milestones within a site, and among sites across milestones.
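For readers wanting the mechanics behind the two concordance measures reported above: Cohen's kappa corrects the raw percentage of agreement for the agreement expected by chance, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the chance proportion derived from each rater's marginal rating frequencies. The sketch below is illustrative only; the pass/fail data and variable names are hypothetical, not MGRS data or the study's actual analysis code.

```python
# Minimal sketch of the two concordance measures in the abstract:
# percentage agreement and Cohen's kappa. Illustrative data only.
from collections import Counter

def percent_agreement(rater, reference):
    """Share of assessments on which the two raters give the same score."""
    assert len(rater) == len(reference)
    return sum(a == b for a, b in zip(rater, reference)) / len(rater)

def cohens_kappa(rater, reference):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater)
    p_o = percent_agreement(rater, reference)
    # Chance agreement: product of the two raters' marginal frequencies,
    # summed over all rating categories.
    freq_a, freq_b = Counter(rater), Counter(reference)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail scores for one milestone, comparing a
# fieldworker against the study coordinator's reference ratings.
fieldworker = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
coordinator = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "fail"]

print(f"agreement = {percent_agreement(fieldworker, coordinator):.0%}")  # 88%
print(f"kappa     = {cohens_kappa(fieldworker, coordinator):.2f}")       # 0.75
```

With these toy ratings, agreement is 88% and kappa is 0.75, which would fall in the "substantial" (0.61–0.80) band used in the Results.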
