Data interpretation: using probability
Author(s) - Drummond G. B., Vowler S. L.
Publication year - 2011
Publication title - The Journal of Physiology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.802
H-Index - 240
eISSN - 1469-7793
pISSN - 0022-3751
DOI - 10.1113/jphysiol.2011.208793
Subject(s) - statistics , probability , interpretation (philosophy) , sample (material) , sample size determination , mathematics , econometrics
Experimental data are analysed statistically to allow us to draw conclusions from a limited set of measurements. The hard fact is that we can never be certain that measurements from a sample will exactly reflect the properties of the entire group of possible candidates available to be studied (although using a sample is often the only practical thing to do). It is possible that some scientists are not even clear that the word ‘sample’ has a special meaning in statistics, or do not understand the importance of taking an unbiased sample. Some may consider a ‘sample’ to be something like the first ten leeches that come out of a jar! If we have taken care to obtain a truly random or representative sample from a large number of possible individuals, we can use this unbiased sample to judge the possibility that our observations support a particular hypothesis. Statistical analysis allows the strength of this possibility to be estimated. Since it is not completely certain, the converse of this likelihood expresses the uncertainty that remains.

Scientists are better at dealing with ‘uncertainty’ than the popular press, but many are still swayed by ‘magical’ cut-off values for P values, such as 0.05, below which hypotheses are considered (supposedly) proven, forgetting that probability is measured on a continuum and is not dichotomous. Words can betray, and often cannot provide sufficient nuances to describe effects which can be indistinct or fuzzy (Pocock & Ware, 2009). Indeed, many of the words we use, such as significance, likelihood and probability, and conclusions such as ‘no effect’, should be used guardedly to avoid mistakes. There are also differences of opinion between statisticians: some are more theoretical and others more pragmatic. Some of the different approaches used for statistical inference are hard for the novice to grasp.
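To illustrate the point that a P value is a position on a continuum rather than a verdict, here is a minimal sketch (not from the article itself) of a two-sided permutation test for a difference in group means. The data, function name and parameters are all hypothetical; the point is only that the test returns a proportion between 0 and 1, with nothing special happening at 0.05.

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    The P value is the fraction of random relabellings of the pooled
    data whose mean difference is at least as extreme as the one
    actually observed.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            count += 1
    return count / n_permutations

# Hypothetical measurements from two samples: the result is a point
# on a continuum of evidence, not a dichotomous pass/fail.
a = [4.1, 3.9, 4.3, 4.0, 4.2]
b = [4.4, 4.6, 4.3, 4.5, 4.7]
p = permutation_p_value(a, b)
```

A P value of 0.049 and one of 0.051 describe almost identical strength of evidence, yet a rigid 0.05 cut-off would treat them as opposite conclusions.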
Although a full mathematical understanding is not necessary for most researchers, it is vital to have sufficient understanding of the basic principles behind the statistical approaches adopted. This avoids treating statistical tests like fire extinguishers, picked up only when smoking data need to be dealt with, in the vague hope that you have grabbed the correct type. It is better to know in advance how the data should be analysed, just as it is better to know which extinguisher works best. The wrong statistical approach could be like using water on an electrical fire!