Open Access
Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference
Author(s) -
Yufei Feng,
Xiaoyu Yang,
Xiaodan Zhu,
Michael Greenspan
Publication year - 2022
Publication title -
Transactions of the Association for Computational Linguistics
Language(s) - English
Resource type - Journals
ISSN - 2307-387X
DOI - 10.1162/tacl_a_00458
Subject(s) - interpretability , computer science , artificial intelligence , inference , generalization , introspection , spurious relationship , machine learning , rule of inference , natural language , overfitting , artificial neural network
We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision. The model samples and rewards specific reasoning paths through policy gradient; the introspective revision algorithm modifies intermediate symbolic reasoning steps to discover reward-earning operations and leverages external knowledge to alleviate spurious reasoning and training inefficiency. The framework is supported by properly designed local relation models that avoid input entangling, which helps ensure the interpretability of the proof paths. The proposed model has built-in interpretability and shows superior capability in monotonicity inference, systematic generalization, and interpretability compared with previous models on existing datasets.
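The two mechanisms the abstract describes can be sketched as follows: a policy samples one natural-logic relation per reasoning step, and introspective revision overrides sampled steps with externally suggested relations. This is a minimal illustrative sketch, not the paper's implementation; the relation set follows MacCartney-style natural logic, and the `policy` and `external_knowledge` interfaces are assumptions made here for illustration (in the paper the policy is a neural network and external knowledge comes from lexical resources).

```python
import random

# Seven natural-logic relations (MacCartney-style), used here only to
# illustrate sampling a symbolic reasoning path.
RELATIONS = ["equivalence", "forward_entailment", "reverse_entailment",
             "negation", "alternation", "cover", "independence"]


def sample_path(policy, steps):
    """Sample one relation per reasoning step.

    `policy` maps a step index to a list of weights over RELATIONS
    (a stand-in for the paper's learned policy network).
    """
    path = []
    for t in range(steps):
        weights = policy(t)
        path.append(random.choices(RELATIONS, weights=weights, k=1)[0])
    return path


def introspective_revision(path, external_knowledge):
    """Replace sampled relations with externally suggested ones where available.

    `external_knowledge` is a dict {step_index: relation} standing in for,
    e.g., lexical relations retrieved from a knowledge base (an assumption
    for illustration only).
    """
    return [external_knowledge.get(t, rel) for t, rel in enumerate(path)]


# Toy usage: a uniform policy, with external knowledge overriding step 1.
uniform = lambda t: [1.0] * len(RELATIONS)
path = sample_path(uniform, steps=3)
revised = introspective_revision(path, {1: "negation"})
```

In the actual framework, revised paths that earn reward would be reinforced via policy gradient; the sketch above only shows the path-sampling and revision interfaces.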
