Biostatistics series module 7: The statistics of diagnostic tests
Author(s) -
Avijit Hazra,
Nithya J Gogtay
Publication year - 2017
Publication title -
Indian Journal of Dermatology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.395
H-Index - 36
eISSN - 1998-3611
pISSN - 0019-5154
DOI - 10.4103/0019-5154.198047
Subject(s) - cutoff, medicine, diagnostic test, test (biology), predictive value, positive predictive value, disease, pre- and post-test probability, statistics, likelihood ratios in diagnostic testing, biostatistics, cut point, predictive value of tests, pathology, radiology, pediatrics, mathematics, epidemiology, paleontology, physics, quantum mechanics, biology
Crucial therapeutic decisions are based on diagnostic tests. Therefore, it is important to evaluate such tests before adopting them for routine use. Although investigations such as blood tests, cultures, biopsies, and radiological imaging are obviously diagnostic tests, it should not be forgotten that specific clinical examination procedures, scoring systems based on physiological or psychological evaluation, and ratings based on questionnaires are also diagnostic tests and therefore merit similar evaluation. In the simplest scenario, a diagnostic test gives either a positive (disease likely) or negative (disease unlikely) result. Ideally, all those with the disease should be classified by a test as positive and all those without the disease as negative. Unfortunately, practically no test gives 100% accurate results. Therefore, leaving aside the economic question, the performance of diagnostic tests is evaluated on the basis of certain indices such as sensitivity, specificity, positive predictive value, and negative predictive value. Likelihood ratios combine information on sensitivity and specificity to express the likelihood that a given test result would occur in a subject with the disorder compared to the probability that the same result would occur in a subject without the disorder. Not all tests can be categorized simply as "positive" or "negative." Physicians are frequently exposed to test results on a numerical scale, and in such cases, judgment is required in choosing a cutoff point to distinguish normal from abnormal. Naturally, a cutoff value should provide the greatest predictive accuracy, but there is a trade-off between sensitivity and specificity here: if the cutoff is too low, it will identify most patients who have the disease (high sensitivity) but will also incorrectly classify many who do not (low specificity). A receiver operating characteristic curve plots pairs of sensitivity versus (1 - specificity) values and helps in selecting an optimum cutoff, namely the one lying on the "elbow" of the curve. Cohen's kappa (κ) statistic is a measure of inter-rater agreement for categorical variables. It can also be applied to assess how far two tests agree with respect to diagnostic categorization. It is generally considered a more robust measure than a simple percent agreement calculation, since kappa takes into account the agreement that could occur by chance.
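
The indices summarized above can be illustrated with a small numerical sketch. The Python snippet below is not part of the original article; the 2x2 counts are hypothetical and serve only to show how sensitivity, specificity, predictive values, likelihood ratios, and Cohen's kappa against a reference standard are computed from a single table of counts.

# Hypothetical 2x2 table of a binary test against a reference ("gold") standard.
tp, fp = 90, 30   # test positive: with disease / without disease
fn, tn = 10, 170  # test negative: with disease / without disease

sensitivity = tp / (tp + fn)               # P(test positive | disease present)
specificity = tn / (tn + fp)               # P(test negative | disease absent)
ppv = tp / (tp + fp)                       # positive predictive value
npv = tn / (tn + fn)                       # negative predictive value
lr_pos = sensitivity / (1 - specificity)   # likelihood ratio of a positive result
lr_neg = (1 - sensitivity) / specificity   # likelihood ratio of a negative result

# Cohen's kappa for agreement between the test and the reference standard,
# computed from the same counts: observed agreement corrected for chance.
n = tp + fp + fn + tn
observed_agreement = (tp + tn) / n
expected_agreement = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)

print(f"Sensitivity: {sensitivity:.2f}  Specificity: {specificity:.2f}")
print(f"PPV: {ppv:.2f}  NPV: {npv:.2f}")
print(f"LR+: {lr_pos:.2f}  LR-: {lr_neg:.2f}")
print(f"Kappa vs. reference standard: {kappa:.2f}")

For a test reported on a numerical scale, the same sensitivity and specificity can be recomputed at each candidate cutoff; choosing the cutoff that maximizes sensitivity + specificity - 1 (Youden's index) is one common way to formalize the "elbow" of the receiver operating characteristic curve described above.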
