Computational argumentation and automatic rule-generation for explainable data-driven modeling
Author(s) -
Luca Longo,
Serena Berretta,
Damiano Verda,
Lucas Rizzo
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/ACCESS.2025.3618992
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Abstract - The creation of data-driven models for classification problems requires increasing transparency and inferential explainability, especially in high-stakes domains such as healthcare, finance, and policy making. Rule-based systems are widely regarded as a strong candidate for developing models that are also comprehensible to humans. However, the generated rules are often considered individually, with minimal or no consideration of their interactions. This research focuses on the adoption of computational argumentation techniques, which allow rules to interact and thereby enhance explainability: a rule can be revoked when new information is introduced, essentially achieving non-monotonicity. Specifically, an empirical study was designed to automatically extract inference rules from datasets of various multi-class classification tasks using the Logic Learning Machine (LLM) approach. These rules were then integrated within a structured argumentation framework that employs abstract argumentation semantics to resolve conflicts among contradicting inferences. Findings demonstrated that the LLM technique can indeed extract compact rules with varying degrees of interpretability and predictive power. Furthermore, the argument-based models built on these rules showed improved inferential and explanatory performance on certain datasets; for example, Cohen's kappa coefficient improved from 0.85 to 0.99 when the argumentation-based conflict-resolution strategy was applied to the same set of rules generated by the LLM. The contribution to the body of knowledge is twofold: a customisable approach, via hyperparameter tuning, for extracting rules from multi-class datasets, and a transparent strategy for integrating them with computational argumentation, which enhances human understanding and supports justifiability.
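The conflict-resolution step described in the abstract can be illustrated with a minimal Python sketch of Dung-style abstract argumentation under grounded semantics. Everything below is hypothetical: the rule names, the attack relation, and the grounded_extension helper are invented for illustration and are not the paper's actual framework. The idea is that rules firing on a record act as arguments, contradictory conclusions attack each other, an exception rule undercuts a more general one, and only conclusions inside the grounded extension survive, which is how a rule gets revoked when new information arrives.

# Minimal illustrative sketch (not the paper's implementation): rules as
# arguments, conflicts as attacks, grounded semantics for acceptance.

def grounded_extension(arguments, attacks):
    # Kleene iteration of the characteristic function: accept an argument
    # once every one of its attackers is attacked by an accepted argument.
    accepted = set()
    while True:
        newly = {
            a for a in arguments if a not in accepted
            and all(any((d, b) in attacks for d in accepted)
                    for b in arguments if (b, a) in attacks)
        }
        if not newly:
            return accepted
        accepted |= newly

# Hypothetical rules firing on the same record with conflicting conclusions;
# r3 is a more specific exception that undercuts r1.
arguments = {"r1: x > 5 -> class A", "r2: y < 2 -> class B", "r3: exception to r1"}
attacks = {
    ("r1: x > 5 -> class A", "r2: y < 2 -> class B"),  # rebuttal: A vs. B
    ("r2: y < 2 -> class B", "r1: x > 5 -> class A"),
    ("r3: exception to r1", "r1: x > 5 -> class A"),   # undercut revokes r1
}

print(grounded_extension(arguments, attacks))
# {'r3: exception to r1', 'r2: y < 2 -> class B'} -- r1 is rejected.

Here the exception r3 defeats r1, which in turn reinstates r2, so the model would output class B. Dropping r3 would leave r1 and r2 in an unresolved mutual attack, and neither conclusion would be accepted under grounded semantics.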