The Power of a Statistical Test: What Does Insignificance Mean?
Author(s) - Markel, Mark D.
Publication year - 1991
Publication title - Veterinary Surgery
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.652
H-Index - 79
eISSN - 1532-950X
pISSN - 0161-3499
DOI - 10.1111/j.1532-950X.1991.tb00336.x
Subject(s) - insignificance, medicine, test (biology), statistical power, statistical significance, statistics, social psychology, paleontology, biology, psychology, mathematics
In statistical testing of data, the p value is a standard measure for reporting quantitative results. When a significant difference is reported (e.g., P < .05), most readers understand that there is less than a 5% chance that the authors have made a type I error (false positive, or α) with their conclusion. In contrast, when nonsignificant differences between treatments, groups, or parameters of interest are reported (e.g., P > .05), many investigators and readers incorrectly interpret this as a 95% chance that the conclusion of no difference is correct. In fact, the α level of significance (in this example, .05) is only one of the parameters that determines the probability of committing a type II error (false negative, or β) when concluding statistical insignificance. Statistical power (1 − β) is the probability that a test will detect a true difference when one exists; when a test reveals insignificance (P > .05), the higher the power, the greater the confidence that the nonsignificant result reflects a true absence of difference rather than a type II error. Power depends on the α level of significance, the sample size, the standard deviation of the population or sample, and the magnitude of the difference the investigators are trying to demonstrate.
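As a minimal sketch of the relationship the abstract describes, the example below computes power from the four listed determinants (α, sample size, standard deviation, and the difference to be detected), assuming a two-sided, two-sample t-test with equal group sizes computed via the noncentral t distribution; the test scenario, function name, and numbers are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np
from scipy import stats

def two_sample_power(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test with equal group sizes.

    delta       : true difference in means the study aims to detect
    sigma       : common standard deviation of the populations
    n_per_group : number of subjects in each group
    alpha       : significance level (type I error rate)
    """
    df = 2 * n_per_group - 2                                 # degrees of freedom
    ncp = delta / (sigma * np.sqrt(2.0 / n_per_group))       # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)                  # two-sided critical value
    # Power = P(|T| > t_crit) when T follows a noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# Illustrative values showing how each parameter moves power
print(two_sample_power(delta=5.0, sigma=10.0, n_per_group=10))               # small study, low power
print(two_sample_power(delta=5.0, sigma=10.0, n_per_group=64))               # larger sample -> higher power
print(two_sample_power(delta=10.0, sigma=10.0, n_per_group=10))              # larger difference -> higher power
print(two_sample_power(delta=5.0, sigma=10.0, n_per_group=10, alpha=0.10))   # looser alpha -> higher power
```

With these assumed numbers, only the larger-sample scenario approaches the conventional 80% power target, which is the abstract's point: a nonsignificant result from an underpowered study says little about whether a real difference exists.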