Open Access
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do
Author(s) -
Linlin Zhao,
Wenyi Wang,
Alexander Sedykh,
Hao Zhu
Publication year - 2017
Publication title -
ACS Omega
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.779
H-Index - 40
ISSN - 2470-1343
DOI - 10.1021/acsomega.7b00274
Subject(s) - quantitative structure–activity relationship , categorical variable , computer science , set (abstract data type) , process (computing) , experimental data , data mining , model validation , data set , machine learning , artificial intelligence , mathematics , statistics , data science , programming language , operating system
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, data quality can vary across sources depending on the nature of the experimental protocols, so potential experimental errors in the modeling sets may lead to poor QSAR models and, in turn, degrade the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, obtained by simulating experimental errors, and QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that had been extensively curated both in-house and by our collaborators to create over 1800 QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., the activities of a fraction of the compounds were randomized) in the modeling process. A fivefold cross-validation procedure was used to evaluate modeling performance, which deteriorated as the ratio of experimental errors increased. All of the resulting models were also used to predict external sets of new compounds that had been excluded at the beginning of the modeling process. The results showed that compounds with relatively large prediction errors in cross-validation are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large cross-validation prediction errors, the external predictions of new compounds did not improve. Our conclusion is that QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors, but removing those compounds based on the cross-validation procedure is not a reasonable way to improve model predictivity, because it leads to overfitting.
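A minimal sketch of the error-simulation idea described above, not the authors' code: the activities of a chosen fraction of compounds are randomized and fivefold cross-validation performance is re-measured. The random forest model, scikit-learn API, synthetic descriptor matrix, and error ratios are all illustrative assumptions rather than details taken from the paper.

```python
# Sketch (hypothetical data and model): simulate experimental errors in a QSAR
# modeling set by randomizing the activities of a fraction of compounds, then
# evaluate fivefold cross-validation performance for a continuous endpoint.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical modeling set: 500 compounds x 50 descriptors.
X = rng.normal(size=(500, 50))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500)

def simulate_errors(y, error_ratio, rng):
    """Return a copy of y in which a fraction of activities are randomized."""
    y_noisy = y.copy()
    n_err = int(round(error_ratio * len(y)))
    idx = rng.choice(len(y), size=n_err, replace=False)
    # Randomize by permuting the selected activities among themselves.
    y_noisy[idx] = rng.permutation(y_noisy[idx])
    return y_noisy

for ratio in (0.0, 0.1, 0.2, 0.4):
    y_mod = simulate_errors(y, ratio, rng)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    r2 = cross_val_score(model, X, y_mod, cv=5, scoring="r2")
    print(f"simulated error ratio {ratio:.0%}: mean 5-fold R^2 = {r2.mean():.3f}")
```

Under this kind of setup, the cross-validated R^2 would be expected to drop as the simulated error ratio grows, mirroring the deterioration reported in the abstract.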
