Assessment of predictions in the model quality assessment category
Author(s) - Domenico Cozzetto, Andriy Kryshtafovych, Michele Ceriani, Anna Tramontano
Publication year - 2007
Publication title - Proteins: Structure, Function, and Bioinformatics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.699
H-Index - 191
eISSN - 1097-0134
pISSN - 0887-3585
DOI - 10.1002/prot.21669
Subject(s) - CASP, correctness, computer science, quality (philosophy), task (project management), set (abstract data type), quality score, quality assessment, correlation, data mining, artificial intelligence, statistics, machine learning, natural language processing, protein structure prediction, mathematics, evaluation methods, reliability engineering, algorithm, metric (unit), philosophy, operations management, physics, geometry, management, engineering, epistemology, nuclear magnetic resonance, protein structure, economics, programming language
The article presents our evaluation of the predictions submitted to the model quality assessment (QA) category in CASP7. In this newly introduced category, predictors were asked to provide quality estimates for protein structure models. The QA category uses the automatically produced models that are traditionally distributed to CASP participants as input for predictions. Predictors were asked to provide an index of the quality of each individual model (QM1) as well as an index for the expected correctness of each of its residues (QM2). For each participating group, we computed the correlation between the observed and predicted quality of the models and of the individual residues, and evaluated the statistical significance of the differences between groups. We also compared the results with those obtained by a “naïve predictor” that assigns a quality score related to how close the model is to the structure of the most similar protein of known structure. The aims of a method for assessing the overall quality of a model can be twofold: selecting the best (or one of the best) model(s) among a set of plausible choices, or assigning a nonrelative quality value to an individual model. The applications of the two strategies are different, albeit equally important. Our assessment of the QA category demonstrates that methods effectively addressing the first task do exist, while there is room for improvement as far as the second aspect is concerned. Notwithstanding the limited number of groups submitting predictions for residue‐level accuracy, our data demonstrate that a respectable accuracy in this task can be achieved by methods relying on the comparison of different models for the same target. Proteins 2007. © 2007 Wiley‐Liss, Inc.
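The evaluation described above rests on correlating predicted quality scores with observed model quality for each target. The following is a minimal sketch of that kind of per-target computation, assuming hypothetical score lists; the function name and the example values are illustrative and not taken from the paper's data.

```python
# Sketch: Pearson correlation between predicted quality scores (e.g. QM1)
# and observed model quality for one target. All values are hypothetical.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for five models of one target:
predicted = [0.80, 0.65, 0.40, 0.90, 0.55]   # group's QM1 estimates
observed  = [0.75, 0.60, 0.35, 0.85, 0.50]   # observed quality vs. experiment
r = pearson(predicted, observed)
```

A high correlation indicates the method ranks models well (the first aim above); it says nothing by itself about whether the absolute quality values are calibrated (the second aim), which is why the two tasks are assessed separately.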
