An efficient triplet‐based algorithm for evidential reasoning
Author(s) - Bi Yaxin
Publication year - 2008
Publication title - International Journal of Intelligent Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.291
H-Index - 87
eISSN - 1098-111X
pISSN - 0884-8173
DOI - 10.1002/int.20278
Subject(s) - evidential reasoning approach , Dempster–Shafer theory , computation , algorithm , set (abstract data type) , computer science , mathematics , theoretical computer science , artificial intelligence , decision support system
Linear‐time computational techniques based on the structure of an evidence space have been developed for combining multiple pieces of evidence over a set of contending hypotheses using Dempster's rule (the orthogonal sum). They make the computation‐intensive calculations involved more efficient in certain circumstances. Unfortunately, they restrict the orthogonal sum of evidential functions to a dichotomous structure that applies only to single elements and their complements. In this paper, we present a novel evidence structure in terms of a triplet, together with a set of algorithms for evidential reasoning. The merit of this structure is that it divides a set of evidence into three subsets, distinguishing trivial evidential elements from important ones and thereby focusing on selected elements of an evidence space. It avoids the deficiencies of the dichotomous structure in representing the preference of evidence and in estimating the basic probability assignment of evidence. We establish a formalism for this structure and general formulae for combining pieces of evidence in the form of the triplet, both of which have been justified theoretically and empirically. © 2008 Wiley Periodicals, Inc.
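The abstract centers on Dempster's rule of combination (the orthogonal sum) applied to mass functions over a frame of discernment. The Python sketch below illustrates the general rule only; the dictionary-of-frozensets representation, the function name dempster_combine, and the hypotheses h1, h2, h3 are illustrative assumptions and are not taken from the paper, which instead introduces a specialized triplet structure and formulae to make this combination more efficient.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule (orthogonal sum).

    Each mass function is a dict mapping a focal element (a frozenset of
    hypotheses) to its basic probability assignment; the masses of each
    function are assumed to sum to 1.
    """
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set (conflict K)
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence: orthogonal sum undefined")
    # Normalize the remaining mass by 1 - K
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Illustrative frame of discernment {h1, h2, h3} and two pieces of evidence.
theta = frozenset({"h1", "h2", "h3"})
m1 = {frozenset({"h1"}): 0.6, theta: 0.4}
m2 = {frozenset({"h2"}): 0.3, theta: 0.7}
print(dempster_combine(m1, m2))
# ≈ {h1}: 0.512, {h2}: 0.146, {h1, h2, h3}: 0.341
```

The general rule enumerates all pairs of focal elements, which is what makes naive combination expensive; the paper's triplet structure restricts and organizes the focal elements into three subsets so that combination can be carried out more efficiently, with the exact formulae given in the paper itself.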
