Open Access
Sentence Similarity Calculation Based on Probabilistic Tolerance Rough Sets
Author(s) - Ruiteng Yan, Dong Qiu, Haihuan Jiang
Publication year - 2021
Publication title - Mathematical Problems in Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.262
H-Index - 62
eISSN - 1026-7077
pISSN - 1024-123X
DOI - 10.1155/2021/1635708
Subject(s) - similarity (geometry) , sentence , computer science , probabilistic logic , semantics (computer science) , artificial intelligence , set (abstract data type) , natural language processing , rough set , semantic similarity , task (project management) , image (mathematics) , management , economics , programming language
Sentence similarity calculation is one of the important foundations of natural language processing. Existing sentence similarity measures are based either on shallow semantics, which cannot adequately capture latent semantic information, or on deep learning algorithms, which require supervision. In this paper, we improve the traditional tolerance rough set model so that it has lower time complexity and supports incremental updates. Based on the resulting probabilistic tolerance rough set model, we then propose a sentence similarity computation model that treats text data from the perspective of uncertainty; it can mine latent semantic information and is unsupervised. Experiments on the SICK2014 task and the STSbenchmark dataset show that our model computes sentence similarity effectively and efficiently.
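The paper's exact probabilistic formulation is not reproduced on this page, but a minimal sketch of the general tolerance-rough-set treatment of text may help fix the idea. The sketch below assumes tolerance classes built from a conditional co-occurrence probability with a threshold theta, upper approximations of each sentence, and a simple Jaccard overlap between those approximations; the function names, the threshold value, and the Jaccard choice are illustrative assumptions, not the authors' method.

```python
from collections import defaultdict

def build_tolerance_classes(sentences, theta=0.3):
    """Tolerance class of each term: terms t_j whose conditional co-occurrence
    probability P(t_j | t_i) over the corpus is at least theta.
    (Hypothetical criterion; the paper's probabilistic model may differ.)"""
    occurrences = defaultdict(int)     # sentences containing t_i
    co_occurrences = defaultdict(int)  # sentences containing both t_i and t_j
    for sent in sentences:
        terms = set(sent.lower().split())
        for t_i in terms:
            occurrences[t_i] += 1
            for t_j in terms:
                if t_i != t_j:
                    co_occurrences[(t_i, t_j)] += 1
    classes = defaultdict(set)
    for (t_i, t_j), n_ij in co_occurrences.items():
        if n_ij / occurrences[t_i] >= theta:
            classes[t_i].add(t_j)
    for t in occurrences:
        classes[t].add(t)              # a term always tolerates itself
    return classes

def upper_approximation(sentence, classes):
    """Upper approximation: all terms whose tolerance class overlaps the sentence."""
    terms = set(sentence.lower().split())
    return {t for t, cls in classes.items() if cls & terms} | terms

def sentence_similarity(s1, s2, classes):
    """Jaccard similarity between upper approximations (one simple choice)."""
    u1, u2 = upper_approximation(s1, classes), upper_approximation(s2, classes)
    return len(u1 & u2) / len(u1 | u2) if u1 | u2 else 0.0

# Toy usage: related sentences should score higher than unrelated ones.
corpus = [
    "a dog is playing in the park",
    "a puppy runs across the park",
    "the stock market fell sharply today",
]
classes = build_tolerance_classes(corpus, theta=0.5)
print(sentence_similarity(corpus[0], corpus[1], classes))
print(sentence_similarity(corpus[0], corpus[2], classes))
```

Because the tolerance classes are built from simple per-term counts, a sketch like this can be updated incrementally as new sentences arrive, which mirrors the incremental property the abstract claims for the improved model.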
