Statistical Tests and Retention of Terms in the Additive Main Effects and Multiplicative Interaction Model for Cultivar Trials
Author(s) - Cornelius P. L.
Publication year - 1993
Publication title - Crop Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.76
H-Index - 147
eISSN - 1435-0653
pISSN - 0011-183X
DOI - 10.2135/cropsci1993.0011183x003300060016x
Subject(s) - ammi , multiplicative function , statistics , type i and type ii errors , mathematics , iterated function , monte carlo method , cultivar , interaction , yield (engineering) , main effect , biology , horticulture , gene–environment interaction , mathematical analysis , materials science , biochemistry , gene , genotype , metallurgy
The additive main effects and multiplicative interaction (AMMI) model has been recommended for cultivar trials repeated across locations and/or years. Previous studies, using approximate F-tests introduced by Gollob, have declared more AMMI interaction principal components (PCs) significant than cross validation could show to be predictively useful. This study used Monte Carlo simulation to investigate whether such a result in an international maize (Zea mays L.) yield trial of nine cultivars in 20 environments could be wholly or partially explained by liberality of the Gollob tests, and also to compare properties of Gollob tests and several more conservative procedures. Gollob tests were found to be extremely liberal (Type I error rate as high as 66% when the first interaction PC in a 9 by 20 table is null), and AMMI users are warned not to rely on them. Tests known as F_GH1 and F_GH2 were essentially equivalent and effectively controlled Type I error rates at or below the intended level, but were conservative for any component for which the previous component was small. Simulation tests and iterated simulation tests with greater power than F_GH1 and F_GH2, but apparently with adequate control of Type I error rates, were developed. Simulation results suggest that Fant or F_GH1 could usually be used to choose a predictive model with only a small loss in accuracy, and sometimes a gain, as compared with the expected model choice by cross validation with half of the data used for modeling and the other half for validation. In some cases cross validation is likely to choose a model with fewer PCs than the optimal truncated model obtainable from the full data set. If cross validation is used to choose a model, it is recommended that all but one replication be used for modeling and only one for validation.
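The AMMI fit the abstract refers to (additive main effects plus a singular value decomposition of the interaction residuals) can be sketched briefly. The following is an illustrative sketch on synthetic data, not code from the paper; the dimensions 9 x 20 mirror the maize trial described above, and the degrees-of-freedom formula g + e - 1 - 2k for the k-th interaction PC is the standard Gollob assignment that the liberal F-tests in the abstract are based on.

```python
import numpy as np

rng = np.random.default_rng(0)
g, e = 9, 20                       # cultivars x environments, as in the trial described
Y = rng.normal(5.0, 1.0, (g, e))   # synthetic cell-mean yield table (illustration only)

# Additive part: grand mean plus cultivar and environment main effects
mu = Y.mean()
gi = Y.mean(axis=1) - mu           # cultivar main effects
ej = Y.mean(axis=0) - mu           # environment main effects
Z = Y - mu - gi[:, None] - ej[None, :]   # interaction residual table

# Multiplicative part: SVD of the residual table yields the interaction PCs
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

def ammi_predict(k):
    """AMMI-k prediction: additive part plus the first k multiplicative terms."""
    return mu + gi[:, None] + ej[None, :] + (U[:, :k] * s[:k]) @ Vt[:k, :]

# Gollob assigns g + e - 1 - 2k df to the k-th interaction PC; over all
# min(g-1, e-1) components these sum to the full interaction df (g-1)(e-1).
gollob_df = [g + e - 1 - 2 * k for k in range(1, min(g, e))]
```

Because the residual table Z has rank min(g-1, e-1), retaining all eight PCs reproduces the observed table exactly; the question the paper addresses is how many of those PCs a test (or cross validation) should retain for prediction.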
