In reply: Determining the sample size in a clinical trial
Author(s) -
Kirby Adrienne,
Gebski Val,
Keech Anthony C
Publication year - 2003
Publication title - Medical Journal of Australia
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.904
H-Index - 131
eISSN - 1326-5377
pISSN - 0025-729X
DOI - 10.5694/j.1326-5377.2003.tb05241.x
SAMPLE SIZE MUST BE PLANNED carefully to ensure that the research time, patient effort and support costs invested in any clinical trial are not wasted. Item 7 of the CONSORT statement relates to the sample size and stopping rules of studies (see Box 1); it states that the choice of sample size needs to be justified.1 Ideally, clinical trials should be large enough to reliably detect the smallest difference in the primary outcome with treatment that is considered clinically worthwhile. It is not uncommon for studies to be under-powered, failing to detect even large treatment effects because of inadequate sample size.2 It may also be considered unethical to recruit patients into a study whose sample size is too small for the trial to deliver meaningful information on the tested intervention.

Components of sample size calculation

The minimum information needed to calculate the sample size for a randomised controlled trial in which a specific event is being counted includes the power, the level of significance, the underlying event rate in the population under investigation and the size of the treatment effect sought. The calculated sample size should then be adjusted for other factors, including expected compliance rates and, less commonly, an unequal allocation ratio.

Power: The power of a study is its ability to detect a true difference in outcome between the standard or control arm and the intervention arm. This is usually chosen to be 80%. By definition, a study power set at 80% accepts a likelihood of one in five (that is, 20%) of missing such a real difference. Thus, the power for large trials is occasionally set at 90% to reduce the possibility of a so-called "false-negative" result to 10%.

Level of significance: The chosen level of significance sets the likelihood of detecting a treatment effect when no effect exists (leading to a so-called "false-positive" result) and defines the threshold P value.
Results with a P value above the threshold lead to the conclusion that an observed difference may be due to chance alone, while those with a P value below the threshold lead to rejecting chance and concluding that the intervention has a real effect. The level of significance is most commonly set at 5% (that is, P = 0.05) or 1% (P = 0.01). This means the investigator is prepared to accept a 5% (or …
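The four components described above can be combined in the standard normal-approximation formula for comparing two event rates, n per arm = (z₁₋α/₂ + z₁₋β)² [p₁(1 − p₁) + p₂(1 − p₂)] / (p₁ − p₂)². A minimal sketch in Python illustrates the arithmetic; the event rates, significance level and power used in the example are illustrative assumptions, not figures taken from the article:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for detecting a difference between
    two event rates with a two-sided test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # e.g. 1.96 when alpha = 0.05
    z_beta = z.inv_cdf(power)            # e.g. 0.84 when power = 80%
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)                  # round up to a whole patient

# Illustrative example: control event rate 15%, hoped-for rate 10%,
# 5% two-sided significance, 80% power.
n_80 = sample_size_two_proportions(0.15, 0.10)
# Raising power to 90% (a smaller chance of a false-negative result)
# increases the required sample size.
n_90 = sample_size_two_proportions(0.15, 0.10, power=0.90)
```

Note how the required sample size grows as the sought treatment effect (p₁ − p₂) shrinks or as the power is raised, which is why each component must be specified before the trial begins.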