
Transfer Learning by Mapping and Revising Boosted Relational Dependency Networks
Author(s) -
Rodrigo Azevedo Santos,
Aline Paes,
Gerson Zaverucha
Publication year - 2020
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5753/ctd.2020.11371
Subject(s) - computer science , statistical relational learning , transfer learning , artificial intelligence , machine learning , relational databases , data modeling , data mining , theoretical computer science
Statistical machine learning algorithms usually assume that a considerable amount of data is available to train the models. However, they fail in domains where data is difficult or expensive to obtain. Transfer learning has emerged to address this problem of learning from scarce data by using a model learned in a source domain, where data is easy to obtain, as a starting point for the target domain. On the other hand, real-world data contains objects and their relations, usually gathered from noisy environments. Finding patterns in such uncertain relational data has been the focus of the Statistical Relational Learning (SRL) area. Thus, to address domains with scarce, relational, and uncertain data, in this paper we propose TreeBoostler, an algorithm that transfers the state-of-the-art SRL model Boosted Relational Dependency Networks, learned in a source domain, to the target domain. TreeBoostler first finds a mapping between pairs of predicates to accommodate the additive trees into the target vocabulary. Afterwards, it employs two theory revision operators devised to handle incorrect relational regression trees, aiming at improving the performance of the mapped trees. In the experiments presented in this paper, TreeBoostler has successfully transferred knowledge among several distinct domains. Moreover, it performs comparably to or better than learning-from-scratch methods in terms of accuracy, and outperforms a transfer learning approach in terms of both accuracy and runtime.
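The predicate-mapping step described in the abstract can be pictured as a search over arity-consistent correspondences between source and target vocabularies. The sketch below is a toy illustration, not the paper's actual procedure: TreeBoostler scores candidate mappings on target data, whereas here the predicate names (from the classic UW-CSE and IMDB transfer setting) and the compatibility scores are assumed for demonstration only.

```python
from itertools import permutations

# Hypothetical predicate signatures: name -> arity (illustrative only).
SOURCE = {"advisedby": 2, "professor": 1, "student": 1}
TARGET = {"workedunder": 2, "director": 1, "actor": 1}

# Assumed pairwise compatibility scores; the real algorithm would
# evaluate each candidate mapping against the target training data.
COMPAT = {
    ("advisedby", "workedunder"): 3,
    ("professor", "director"): 2,
    ("student", "actor"): 2,
}

def score(mapping):
    """Sum the compatibility of each (source, target) predicate pair."""
    return sum(COMPAT.get(pair, 0) for pair in mapping.items())

def best_mapping(source, target):
    """Exhaustively search 1-to-1, arity-consistent mappings; keep the best."""
    src = list(source)
    best, best_score = None, -1
    for perm in permutations(target, len(src)):
        candidate = dict(zip(src, perm))
        # Predicates may only map to predicates of the same arity.
        if any(source[s] != target[t] for s, t in candidate.items()):
            continue
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

mapping = best_mapping(SOURCE, TARGET)
```

Once such a mapping is fixed, the source trees can be rewritten in the target vocabulary; the revision operators then repair nodes that the mapping rendered inaccurate.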