Agreement, the F-Measure, and Reliability in Information Retrieval
Author(s) - George Hripcsak
Publication year - 2005
Publication title - Journal of the American Medical Informatics Association
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.614
H-Index - 150
eISSN - 1527-974X
pISSN - 1067-5027
DOI - 10.1197/jamia.m1733
Subject(s) - inter-rater reliability, reliability (semiconductor), measure (data warehouse), gold standard (test), computer science, statistic, quality (philosophy), recall, statistics, data mining, mathematics, psychology, cognitive psychology, rating scale, power (physics), physics, philosophy, quantum mechanics, epistemology
Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics, such as the kappa statistic, to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts, and that kappa approaches these measures as the number of negative cases grows large. Positive specific agreement, or the equivalent F-measure, may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies.
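The algebra behind these claims can be sketched with the standard 2×2 notation (assumed here, not quoted from the paper's text): for two raters, let a be the number of cases both mark positive, b and c the cases only one rater marks positive, and d the cases both mark negative. Treating one rater's positives as the gold standard gives precision P = a/(a+b) and recall R = a/(a+c), so the F-measure reduces to the positive specific agreement p_pos, and kappa's closed form shows the limiting behavior as the negative count d grows:

\[
F = \frac{2PR}{P+R}
  = \frac{2\,\frac{a}{a+b}\,\frac{a}{a+c}}{\frac{a}{a+b}+\frac{a}{a+c}}
  = \frac{2a}{2a+b+c}
  = p_{\mathrm{pos}},
\qquad
\kappa = \frac{2(ad-bc)}{(a+b)(b+d)+(a+c)(c+d)}
  \;\longrightarrow\; \frac{2a}{2a+b+c}
  \quad \text{as } d \to \infty .
\]

As d grows without bound, the numerator of kappa grows as 2ad and the denominator as (2a+b+c)d, so kappa tends to p_pos; this is why, when the number of negatives is ill-defined, p_pos (equivalently the F-measure) remains a usable reliability statistic while kappa does not.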
